Questions in openstack

Re: [Openstack] external router interface is DOWN

On Tue, Aug 5, 2014 at 11:21 AM, Aru s arumon82@gmail.com wrote:

Hi All,

I can see there are a lot of people facing the same kind of issue and nobody
has a solution. Please help me understand whether it is a bug or a
configuration issue.

Regards,
Arumon

Hi Arumon,
You'll find lots of people with similar issues on ask.openstack.org and
often it's a configuration issue.

https://ask.openstack.org/en/questions/scope:all/sort:activity-desc/page:1/query:external%20router%20interface%20down/

Hope this helps -
Anne

On Mon, Aug 4, 2014 at 11:02 PM, Aru s arumon82@gmail.com wrote:

Hi All,

I am trying OpenStack Icehouse on RHEL 6.4 on VirtualBox. All went well
except that the external router interface shows DOWN status, so the VMs'
external communication is not working. Please help me troubleshoot.

Regards.
Arumon

On Mon, Aug 4, 2014 at 10:58 PM, Aru s arumon82@gmail.com wrote:

Hi All,

I am trying OpenStack Icehouse on RHEL 6.4 on VirtualBox. All went well
except that the external router interface shows DOWN status, so the VMs'
external communication is not working. Please help me troubleshoot.

Regards.
Arumon


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Thanks Anne. I got some ideas from the URL you provided. I will try the
options and report back.

Regards,
Arumon

On Tue, Aug 5, 2014 at 11:55 PM, Anne Gentle anne@openstack.org wrote:

On Tue, Aug 5, 2014 at 11:21 AM, Aru s arumon82@gmail.com wrote:

Hi All,

I can see there are a lot of people facing the same kind of issue and
nobody has a solution. Please help me understand whether it is a bug or a
configuration issue.

Regards,
Arumon

Hi Arumon,
You'll find lots of people with similar issues on ask.openstack.org and
often it's a configuration issue.

https://ask.openstack.org/en/questions/scope:all/sort:activity-desc/page:1/query:external%20router%20interface%20down/

Hope this helps -
Anne

On Mon, Aug 4, 2014 at 11:02 PM, Aru s arumon82@gmail.com wrote:

Hi All,

I am trying OpenStack Icehouse on RHEL 6.4 on VirtualBox. All went well
except that the external router interface shows DOWN status, so the VMs'
external communication is not working. Please help me troubleshoot.

Regards.
Arumon

On Mon, Aug 4, 2014 at 10:58 PM, Aru s arumon82@gmail.com wrote:

Hi All,

I am trying OpenStack Icehouse on RHEL 6.4 on VirtualBox. All went well
except that the external router interface shows DOWN status, so the VMs'
external communication is not working. Please help me troubleshoot.

Regards.
Arumon


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Getting size of volume_type using API

Is there a way to get the size of a volume_type using an API?
According to the docs
http://developer.openstack.org/api-ref-compute-v2-ext.html ,
GET /v1.1/{tenant_id}/os-volume-types/{volume_type_id}
returns the following (but not the size).

{
    "volume_type": {
        "id": "289da7f8-6440-407c-9fb4-7db01ec49164",
        "name": "vol-type-001",
        "extra_specs": {
            "capabilities": "gpu"
        }
    }
}

Sekhar Vajjhala


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

No, there is no way at this point in time. It is also a difficult expectation
to meet, because more than one physical backend (say, an HP 3PAR CPG) may be
attached to the same volume type.
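
For reference, the extra specs that are exposed can be inspected with
python-cinderclient. A minimal sketch, assuming an Icehouse-era client; the
credentials and auth URL are placeholders, not values from this thread:

# Placeholder credentials; substitute your own.
from cinderclient.v1 import client

cinder = client.Client('admin', 'secret', 'admin',
                       'http://controller:5000/v2.0')

for vtype in cinder.volume_types.list():
    # get_keys() returns the extra_specs dict; note that a volume type
    # carries no size attribute, matching the API response above.
    print("%s: %s" % (vtype.name, vtype.get_keys()))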

On Wed, Aug 6, 2014 at 1:31 AM, Sekhar Vajjhala sekharv01@gmail.com wrote:

Is there a way to get the size of a volume_type using an API?
According to the docs
http://developer.openstack.org/api-ref-compute-v2-ext.html ,
GET /v1.1/{tenant_id}/os-volume-types/{volume_type_id}
returns the following (but not the size).

{
    "volume_type": {
        "id": "289da7f8-6440-407c-9fb4-7db01ec49164",
        "name": "vol-type-001",
        "extra_specs": {
            "capabilities": "gpu"
        }
    }
}

Sekhar Vajjhala


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Heat-IceHouse, stack creation fails

Hi,
I am trying Heat on Icehouse but I cannot even create a tiny VM as explained in
the documents
(http://docs.openstack.org/icehouse/install-guide/install/apt/content/heat-verify.html).
In the heat-engine log file, I saw that the "stack_user_domain" ID is not set in
the heat.conf file. So I modified heat.conf and added the admin ID (the one
defined when Keystone was installed and configured) and its name and password
for stack_user_domain, stack_domain_admin, and stack_domain_admin_password,
respectively. But stack creation still fails, and I see in the logs:
"ClientException: The server has either erred or is incapable of performing the
requested operation".

What should be set in heat.conf for stack_user_domain and the other
corresponding variables? Should I create a new domain for Heat, and how?

Many thanks,
Parisa


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On 06/08/14 09:25, Parisa Heidari wrote:
Hi,
I am trying Heat on Icehouse but I cannot even create a tiny VM as explained in
the documents
(http://docs.openstack.org/icehouse/install-guide/install/apt/content/heat-verify.html).
In the heat-engine log file, I saw that the "stack_user_domain" ID is not set in
the heat.conf file. So I modified heat.conf and added the admin ID (the one
defined when Keystone was installed and configured) and its name and password
for stack_user_domain, stack_domain_admin, and stack_domain_admin_password,
respectively. But stack creation still fails, and I see in the logs:
"ClientException: The server has either erred or is incapable of performing the
requested operation".

What should be set in heat.conf for stack_user_domain and the other
corresponding variables? Should I create a new domain for Heat, and how?

Many thanks,
Parisa

It looks like you need to run the heat-keystone-setup-domain script and
copy the resulting snippet into heat.conf.
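
A sketch of what that looks like (the flag values here are illustrative, not
from this thread; the script itself prints the exact snippet to use):

# Creates the Heat stack domain and a domain admin user, then prints
# the matching heat.conf settings.
heat-keystone-setup-domain --stack-user-domain-name heat_user_domain \
    --stack-domain-admin heat_domain_admin \
    --stack-domain-admin-password <password>

# The printed snippet goes into heat.conf, roughly:
stack_user_domain = <domain id printed by the script>
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = <password>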


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Re: [Openstack] Custom Nova Scheduler Weigher

For what it is worth... I was able to code a weigher. By dropping the code
below into a .py file in the weights directory and restarting the scheduler,
the scheduler automatically considers whether instances in the same group are
already on a node before placing another instance there. Since both the RAM
weigher and the anti-affinity weigher are on the playing field, I have set
"one instance in the same group = 100 GB of RAM".

Thanks,
Danny Beutler

"""
AntiAffinityWeigher. Weigh hosts by whether or not they have another host
in the same group.

"""

from oslo.config import cfg

from nova import db
from nova.openstack.common.gettextutils import _
from nova.scheduler import weights
from nova.scheduler import filters
from nova.openstack.common import log as logging

LOG = logging.getLogger(name)

antiaffinityweightopts = [
cfg.FloatOpt('antiAffinityWeigher
Multiplier',
default=1.0,
help='Multiplier used for weighing hosts. Negative '
'numbers mean to spread vs stack.'),
]

CONF = cfg.CONF
CONF.registeropts(antiaffinityweightopts)

class AntiAffinityWeigher(weights.BaseHostWeigher):
def weightmultiplier(self):
"""Override the weight multiplier."""
return CONF.antiAffinityWeigher_Multiplier

def _weigh_object(self, host_state, weight_properties):
    group_hosts = weight_properties.get('group_hosts') or []
    LOG.debug(_("Group anti affinity weigher: check if %(host)s not "
                "in %(configured)s"), {'host': host_state.host,
                                       'configured': group_hosts})
    if group_hosts:
        group_hosts = weight_properties.get('group_hosts') or []
        num_instances_in_group = group_hosts.count(host_state.host)
        LOG.debug(_("Number of instances in the same group on this node

%(host)s"), {'host': numinstancesingroup})
return group
hosts.count(host_state.host) * -100000

    # No groups configured
    return 0
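
(For context: judging by the traceback later in this thread, the class could
not be imported because the configured path lacked the module name. If the
file were saved as, say, anti_affinity.py (a hypothetical name) under
nova/scheduler/weights/, the matching nova.conf entry would presumably be:

scheduler_weight_classes=nova.scheduler.weights.all_weighers,nova.scheduler.weights.anti_affinity.AntiAffinityWeigher

Note that all_weighers already discovers any weigher class in that package,
which matches the "drop a .py file into the weights directory" approach
described above.)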

On Mon, Aug 4, 2014 at 12:14 PM, Dugger, Donald D <donald.d.dugger@intel.com> wrote:

Danny-

People have been thinking about affinity/anti-affinity scheduling, so this
is a good area to look at, but we might want to think of a general approach
that addresses this and other issues. I know that Yathi & Debo have
proposed a BP:
proposed a BP:

https://blueprints.launchpad.net/nova/+spec/solver-scheduler

you might want to check it out and see how it relates to your issues.

--

Don Dugger

"Censeo Toto nos in Kansa esse decisse." - D. Gale

Ph: 303/443-3786

From: Danny Beutler [mailto:dannybeutler@gmail.com]
Sent: Friday, August 1, 2014 10:47 AM
To: openstack@lists.openstack.org
Subject: [Openstack] Custom Nova Scheduler Weigher

I am in the process of implementing a custom weigher class. I have created
a weigher that prefers hosts which do not have other instances in the same
group (think GroupAntiAffinityFilter but for weight).

Here is the code for the class:

# Copyright (c) 2011 OpenStack Foundation
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

"""
AntiAffinityWeigher. Weigh hosts by whether or not they have another host
in the same group.

"""

from oslo.config import cfg

from nova import db
from nova.openstack.common.gettextutils import _
from nova.scheduler import weights
from nova.scheduler import filters
from nova.openstack.common import log as logging

LOG = logging.getLogger(name)

antiaffinityweightopts = [
cfg.FloatOpt('antiAffinityWeigher
Multiplier',
default=1000.0,
help='Multiplier used for weighing hosts. Negative '
'numbers mean to stack vs spread.'),
]

CONF = cfg.CONF
CONF.registeropts(antiaffinityweightopts)

class AntiAffinityWeigher(weights.BaseHostWeigher):
def weightmultiplier(self):
"""Override the weight multiplier."""
return CONF.antiAffinityWeigher_Multiplier

def _weigh_object(self, host_state, weight_properties):
    group_hosts = weight_properties.get('group_hosts') or []
    LOG.debug(_("Group anti affinity Weigher: check if %(host)s not "
        "in %(configured)s"), {'host': host_state.host,
        'configured': group_hosts})
    if group_hosts:
        return group_hosts.amount() * 100000

    # No groups configured
    return 0

I know the Python is at least close to correct because the scheduler
service wouldn't even restart until it was. After I got the bugs worked out
of the module, I modified the /etc/nova/nova.conf file to add the custom
weigher like so:

scheduler_weight_classes=nova.scheduler.weights.all_weighers,nova.scheduler.AntiAffinityWeigher

After restarting the scheduler service I get the following error in the
nova logs:
<178>Aug 1 16:46:11 node-25 nova-nova CRITICAL: Class AntiAffinityWeigher
cannot be found (['Traceback (most recent call last):\n', '  File
"/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py",
line 31, in import_class\n    return getattr(sys.modules[mod_str],
class_str)\n', "AttributeError: 'module' object has no attribute
'AntiAffinityWeigher'\n"])
Traceback (most recent call last):
  File "/usr/bin/nova-scheduler", line 10, in <module>
    sys.exit(main())
  File "/usr/lib/python2.6/site-packages/nova/cmd/scheduler.py", line 39, in main
    topic=CONF.scheduler_topic)
  File "/usr/lib/python2.6/site-packages/nova/service.py", line 257, in create
    db_allowed=db_allowed)
  File "/usr/lib/python2.6/site-packages/nova/service.py", line 139, in __init__
    self.manager = manager_class(host=self.host, *args, **kwargs)
  File "/usr/lib/python2.6/site-packages/nova/scheduler/manager.py", line 65, in __init__
    self.driver = importutils.import_object(scheduler_driver)
  File "/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py", line 40, in import_object
    return import_class(import_str)(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/nova/scheduler/filter_scheduler.py", line 59, in __init__
    super(FilterScheduler, self).__init__(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/nova/scheduler/driver.py", line 103, in __init__
    CONF.scheduler_host_manager)
  File "/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py", line 40, in import_object
    return import_class(import_str)(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/nova/scheduler/host_manager.py", line 297, in __init__
    CONF.scheduler_weight_classes)
  File "/usr/lib/python2.6/site-packages/nova/loadables.py", line 105, in get_matching_classes
    obj = importutils.import_class(cls_name)
  File "/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py", line 35, in import_class
    traceback.format_exception(*sys.exc_info())))
ImportError: Class AntiAffinityWeigher cannot be found (['Traceback (most
recent call last):\n', '  File
"/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py",
line 31, in import_class\n    return getattr(sys.modules[mod_str],
class_str)\n', "AttributeError: 'module' object has no attribute
'AntiAffinityWeigher'\n"])

I have also tried a few different naming conventions such as
"AntiAffinityWeigher.AntiAffinityWeigher" and
"myWeigher.AntiAffinityWeigher" to no avail.

Any help would be greatly appreciated.

Thanks,
Danny


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Re: [Openstack] Dashboard Error

What file does the output reside in?
Does your Keystone work?
What do your configuration files look like?
And what exactly had you done before you hit this error?

Tao

On 08/05/2014 03:41 PM, Sujeet Mulmi wrote:

Hi Zhou,

with controller IP, following output results:

[Tue Aug 05 07:25:45 2014] [error]
INFO:urllib3.connectionpool:Starting new HTTP connection (1):
192.168.137.101
[Tue Aug 05 07:25:45 2014] [error] DEBUG:urllib3.connectionpool:"POST
/v2.0/tokens HTTP/1.1" 200 1349
[Tue Aug 05 07:25:45 2014] [error]
INFO:urllib3.connectionpool:Starting new HTTP connection (1):
192.168.137.101
[Tue Aug 05 07:25:45 2014] [error] DEBUG:urllib3.connectionpool:"GET
/v2.0/tenants HTTP/1.1" 200 143
[Tue Aug 05 07:25:45 2014] [error]
INFO:urllib3.connectionpool:Starting new HTTP connection (1):
192.168.137.101
[Tue Aug 05 07:25:45 2014] [error] DEBUG:urllib3.connectionpool:"POST
/v2.0/tokens HTTP/1.1" 200 3967
[Tue Aug 05 07:25:45 2014] [error] Login successful for user "admin".
[Tue Aug 05 07:25:46 2014] [error]
[Tue Aug 05 07:25:46 2014] [error] REQ: curl -i
'http://192.168.137.101:8774/v2/121eae2f2f664dc49265ad11229022c3/extensions'
-X GET -H "X-Auth-Project-Id: 121eae2f2f664dc49265ad11229022c3" -H
"User-Agent: python-novaclient" -H "Accept: application/json" -H
"X-Auth-Token: 2b44d3ef0b882019cad7e470cc5cc213"
[Tue Aug 05 07:25:46 2014] [error]
[Tue Aug 05 07:25:46 2014] [error]
INFO:urllib3.connectionpool:Starting new HTTP connection (1):
192.168.137.101
[Tue Aug 05 07:25:46 2014] [error] DEBUG:urllib3.connectionpool:"GET
/v2/121eae2f2f664dc49265ad11229022c3/extensions HTTP/1.1" 401 23
[Tue Aug 05 07:25:46 2014] [error] RESP: [401] {'date': 'Tue, 05 Aug
2014 07:25:46 GMT', 'content-length': '23', 'content-type':
'text/plain', 'www-authenticate': "Keystone
uri='http://192.168.168.101:5000/v2.0'"}
[Tue Aug 05 07:25:46 2014] [error] RESP BODY: Authentication required
[Tue Aug 05 07:25:46 2014] [error]
[Tue Aug 05 07:25:46 2014] [error]
[Tue Aug 05 07:25:46 2014] [error] REQ: curl -i
'http://192.168.137.101:8774/v2/121eae2f2f664dc49265ad11229022c3' -X
GET -H "X-Auth-Project-Id: 121eae2f2f664dc49265ad11229022c3" -H
"X-Auth-Key: 2b44d3ef0b882019cad7e470cc5cc213" -H "Accept:
application/json" -H "X-Auth-User: admin" -H "User-Agent:
python-novaclient"
[Tue Aug 05 07:25:46 2014] [error]
[Tue Aug 05 07:25:46 2014] [error]
INFO:urllib3.connectionpool:Starting new HTTP connection (1):
192.168.137.101
[Tue Aug 05 07:25:46 2014] [error] DEBUG:urllib3.connectionpool:"GET
/v2/121eae2f2f664dc49265ad11229022c3 HTTP/1.1" 401 23
[Tue Aug 05 07:25:46 2014] [error] RESP: [401] {'date': 'Tue, 05 Aug
2014 07:25:46 GMT', 'content-length': '23', 'content-type':
'text/plain', 'www-authenticate': "Keystone
uri='http://192.168.168.101:5000/v2.0'"}
[Tue Aug 05 07:25:46 2014] [error] RESP BODY: Authentication required
[Tue Aug 05 07:25:46 2014] [error]
[Tue Aug 05 07:25:46 2014] [error] Internal Server Error: /dashboard/admin/
[Tue Aug 05 07:25:46 2014] [error] Traceback (most recent call last):
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py", line 109, in get_response
[Tue Aug 05 07:25:46 2014] [error] response = callback(request, *callback_args, **callback_kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/decorators.py", line 38, in dec
[Tue Aug 05 07:25:46 2014] [error] return view_func(request, *args, **kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/decorators.py", line 86, in dec
[Tue Aug 05 07:25:46 2014] [error] return view_func(request, *args, **kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/decorators.py", line 54, in dec
[Tue Aug 05 07:25:46 2014] [error] return view_func(request, *args, **kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/decorators.py", line 38, in dec
[Tue Aug 05 07:25:46 2014] [error] return view_func(request, *args, **kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/decorators.py", line 86, in dec
[Tue Aug 05 07:25:46 2014] [error] return view_func(request, *args, **kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/django/views/generic/base.py", line 48, in view
[Tue Aug 05 07:25:46 2014] [error] return self.dispatch(request, *args, **kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/django/views/generic/base.py", line 69, in dispatch
[Tue Aug 05 07:25:46 2014] [error] return handler(request, *args, **kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/tables/views.py", line 154, in get
[Tue Aug 05 07:25:46 2014] [error] handled = self.construct_tables()
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/tables/views.py", line 145, in construct_tables
[Tue Aug 05 07:25:46 2014] [error] handled = self.handle_table(table)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/tables/views.py", line 118, in handle_table
[Tue Aug 05 07:25:46 2014] [error] data = self._get_data_dict()
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/tables/views.py", line 181, in _get_data_dict
[Tue Aug 05 07:25:46 2014] [error] self._data = {self.table_class._meta.name: self.get_data()}
[Tue Aug 05 07:25:46 2014] [error] File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/admin/overview/views.py", line 60, in get_data
[Tue Aug 05 07:25:46 2014] [error] data = super(GlobalOverview, self).get_data()
[Tue Aug 05 07:25:46 2014] [error] File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/views.py", line 43, in get_data
[Tue Aug 05 07:25:46 2014] [error] self.usage.summarize(*self.usage.get_date_range())
[Tue Aug 05 07:25:46 2014] [error] File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/base.py", line 200, in summarize
[Tue Aug 05 07:25:46 2014] [error] if not api.nova.extension_supported('SimpleTenantUsage', self.request):
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/utils/memoized.py", line 90, in wrapped
[Tue Aug 05 07:25:46 2014] [error] value = cache[key] = func(*args, **kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/nova.py", line 752, in extension_supported
[Tue Aug 05 07:25:46 2014] [error] extensions = list_extensions(request)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/utils/memoized.py", line 90, in wrapped
[Tue Aug 05 07:25:46 2014] [error] value = cache[key] = func(*args, **kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/nova.py", line 743, in list_extensions
[Tue Aug 05 07:25:46 2014] [error] return nova_list_extensions.ListExtManager(novaclient(request)).show_all()
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/novaclient/v1_1/contrib/list_extensions.py", line 37, in show_all
[Tue Aug 05 07:25:46 2014] [error] return self._list("/extensions", 'extensions')
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/novaclient/base.py", line 64, in _list
[Tue Aug 05 07:25:46 2014] [error] _resp, body = self.api.client.get(url)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 309, in get
[Tue Aug 05 07:25:46 2014] [error] return self._cs_request(url, 'GET', **kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 301, in _cs_request
[Tue Aug 05 07:25:46 2014] [error] raise e
[Tue Aug 05 07:25:46 2014] [error] Unauthorized: Unauthorized (HTTP 401)

On Tue, Aug 5, 2014 at 11:33 AM, ZHOU TAO A <tao.a.zhou@alcatel-lucent.com> wrote:

You have the wrong endpoints:

| 791fecb5eb414f2ebf4ebb0700e8f847 | regionOne | http://192.168.137.102:8774/v2/%(tenant_id)s | http://192.168.137.102:8774/v2/%(tenant_id)s | http://192.168.137.102:8774/v2/%(tenant_id)s |

The IP address should be the controller's IP.
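
An illustration of what the fix might look like (the endpoint and service
IDs are taken from the keystone endpoint-list output below; verify them
before running anything):

# Remove the nova endpoint that points at the compute node...
keystone endpoint-delete 791fecb5eb414f2ebf4ebb0700e8f847
# ...and recreate it against the controller (192.168.137.101).
keystone endpoint-create --region regionOne \
  --service-id 60a3a48fdd2d4cbdaedc2209579b5f7d \
  --publicurl 'http://192.168.137.101:8774/v2/%(tenant_id)s' \
  --internalurl 'http://192.168.137.101:8774/v2/%(tenant_id)s' \
  --adminurl 'http://192.168.137.101:8774/v2/%(tenant_id)s'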


On 08/05/2014 11:28 AM, Sujeet Mulmi wrote:
Hi,

I am trying to bring the Dashboard up and ended up with the following error:

[Tue Aug 05 02:16:42 2014] [error]
INFO:urllib3.connectionpool:Starting new HTTP connection (1):
192.168.137.101
[Tue Aug 05 02:16:42 2014] [error]
DEBUG:urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 200 1349
[Tue Aug 05 02:16:42 2014] [error]
INFO:urllib3.connectionpool:Starting new HTTP connection (1):
192.168.137.101
[Tue Aug 05 02:16:42 2014] [error]
DEBUG:urllib3.connectionpool:"GET /v2.0/tenants HTTP/1.1" 200 143
[Tue Aug 05 02:16:42 2014] [error]
INFO:urllib3.connectionpool:Starting new HTTP connection (1):
192.168.137.101
[Tue Aug 05 02:16:42 2014] [error]
DEBUG:urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 200 3967
[Tue Aug 05 02:16:42 2014] [error] Login successful for user "admin".
[Tue Aug 05 02:16:42 2014] [error]
[Tue Aug 05 02:16:42 2014] [error] REQ: curl -i
'http://192.168.137.102:8774/v2/121eae2f2f664dc49265ad11229022c3/extensions'
-X GET -H "X-Auth-Project-Id: 121eae2f2f664dc49265ad11229022c3"
-H "User-Agent: python-novaclient" -H "Accept: application/json"
-H "X-Auth-Token: 352f4915d730d862e28c8ef02a85b029"
[Tue Aug 05 02:16:42 2014] [error]
[Tue Aug 05 02:16:42 2014] [error]
INFO:urllib3.connectionpool:Starting new HTTP connection (1):
192.168.137.102



The endpoint list is as follows:
[root@controller ~]# keystone endpoint-list
+----------------------------------+-----------+----------------------------------------------+----------------------------------------------+----------------------------------------------+----------------------------------+
|                id                |   region  |                   publicurl                  |                  internalurl                 |                   adminurl                   |            service_id            |
+----------------------------------+-----------+----------------------------------------------+----------------------------------------------+----------------------------------------------+----------------------------------+
| 791fecb5eb414f2ebf4ebb0700e8f847 | regionOne | http://192.168.137.102:8774/v2/%(tenant_id)s | http://192.168.137.102:8774/v2/%(tenant_id)s | http://192.168.137.102:8774/v2/%(tenant_id)s | 60a3a48fdd2d4cbdaedc2209579b5f7d |
| a12049abee334215bea3308a6e01a43d | regionOne | http://192.168.137.101:9292                  | http://192.168.137.101:9292                  | http://192.168.137.101:9292                  | 79f631a6530746458944d8606eeb9289 |
| f9a297aa9cd54f9d8442b703832a9734 | regionOne | http://192.168.137.101:5000/v2.0             | http://192.168.137.101:5000/v2.0             | http://192.168.137.101:35357/v2.0            | f124253db62f4c56ac324ec77552dbf1 |
+----------------------------------+-----------+----------------------------------------------+----------------------------------------------+----------------------------------------------+----------------------------------+

where 192.168.137.102 = compute node,
      192.168.137.101 = controller node.

Could anyone suggest how to resolve the issue?

Regards,
Sujeet


_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

The last modification was made in the Dashboard local_settings. Please check
the attached file.

Sujeet

On Wed, Aug 6, 2014 at 6:32 AM, ZHOU TAO A tao.a.zhou@alcatel-lucent.com
wrote:

What file does the output reside in?
Does your Keystone work?
What do your configuration files look like?
And what exactly had you done before you hit this error?

Tao

On 08/05/2014 03:41 PM, Sujeet Mulmi wrote:

Hi Zhou,

with controller IP, following output results:

[Tue Aug 05 07:25:45 2014] [error] INFO:urllib3.connectionpool:Starting
new HTTP connection (1): 192.168.137.101
[Tue Aug 05 07:25:45 2014] [error] DEBUG:urllib3.connectionpool:"POST
/v2.0/tokens HTTP/1.1" 200 1349
[Tue Aug 05 07:25:45 2014] [error] INFO:urllib3.connectionpool:Starting
new HTTP connection (1): 192.168.137.101
[Tue Aug 05 07:25:45 2014] [error] DEBUG:urllib3.connectionpool:"GET
/v2.0/tenants HTTP/1.1" 200 143
[Tue Aug 05 07:25:45 2014] [error] INFO:urllib3.connectionpool:Starting
new HTTP connection (1): 192.168.137.101
[Tue Aug 05 07:25:45 2014] [error] DEBUG:urllib3.connectionpool:"POST
/v2.0/tokens HTTP/1.1" 200 3967
[Tue Aug 05 07:25:45 2014] [error] Login successful for user "admin".
[Tue Aug 05 07:25:46 2014] [error]
[Tue Aug 05 07:25:46 2014] [error] REQ: curl -i '
http://192.168.137.101:8774/v2/121eae2f2f664dc49265ad11229022c3/extensions'
-X GET -H "X-Auth-Project-Id: 121eae2f2f664dc49265ad11229022c3" -H
"User-Agent: python-novaclient" -H "Accept: application/json" -H
"X-Auth-Token: 2b44d3ef0b882019cad7e470cc5cc213"
[Tue Aug 05 07:25:46 2014] [error]
[Tue Aug 05 07:25:46 2014] [error] INFO:urllib3.connectionpool:Starting
new HTTP connection (1): 192.168.137.101
[Tue Aug 05 07:25:46 2014] [error] DEBUG:urllib3.connectionpool:"GET
/v2/121eae2f2f664dc49265ad11229022c3/extensions HTTP/1.1" 401 23
[Tue Aug 05 07:25:46 2014] [error] RESP: [401] {'date': 'Tue, 05 Aug 2014
07:25:46 GMT', 'content-length': '23', 'content-type': 'text/plain',
'www-authenticate': "Keystone uri='http://192.168.168.101:5000/v2.0'"}
[Tue Aug 05 07:25:46 2014] [error] RESP BODY: Authentication required
[Tue Aug 05 07:25:46 2014] [error]
[Tue Aug 05 07:25:46 2014] [error]
[Tue Aug 05 07:25:46 2014] [error] REQ: curl -i '
http://192.168.137.101:8774/v2/121eae2f2f664dc49265ad11229022c3' -X GET
-H "X-Auth-Project-Id: 121eae2f2f664dc49265ad11229022c3" -H "X-Auth-Key:
2b44d3ef0b882019cad7e470cc5cc213" -H "Accept: application/json" -H
"X-Auth-User: admin" -H "User-Agent: python-novaclient"
[Tue Aug 05 07:25:46 2014] [error]
[Tue Aug 05 07:25:46 2014] [error] INFO:urllib3.connectionpool:Starting
new HTTP connection (1): 192.168.137.101
[Tue Aug 05 07:25:46 2014] [error] DEBUG:urllib3.connectionpool:"GET
/v2/121eae2f2f664dc49265ad11229022c3 HTTP/1.1" 401 23
[Tue Aug 05 07:25:46 2014] [error] RESP: [401] {'date': 'Tue, 05 Aug 2014
07:25:46 GMT', 'content-length': '23', 'content-type': 'text/plain',
'www-authenticate': "Keystone uri='http://192.168.168.101:5000/v2.0'"}
[Tue Aug 05 07:25:46 2014] [error] RESP BODY: Authentication required
[Tue Aug 05 07:25:46 2014] [error]
[Tue Aug 05 07:25:46 2014] [error] Internal Server Error: /dashboard/admin/
[Tue Aug 05 07:25:46 2014] [error] Traceback (most recent call last):
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py", line 109, in get_response
[Tue Aug 05 07:25:46 2014] [error] response = callback(request, *callback_args, **callback_kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/decorators.py", line 38, in dec
[Tue Aug 05 07:25:46 2014] [error] return view_func(request, *args, **kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/decorators.py", line 86, in dec
[Tue Aug 05 07:25:46 2014] [error] return view_func(request, *args, **kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/decorators.py", line 54, in dec
[Tue Aug 05 07:25:46 2014] [error] return view_func(request, *args, **kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/decorators.py", line 38, in dec
[Tue Aug 05 07:25:46 2014] [error] return view_func(request, *args, **kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/decorators.py", line 86, in dec
[Tue Aug 05 07:25:46 2014] [error] return view_func(request, *args, **kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/django/views/generic/base.py", line 48, in view
[Tue Aug 05 07:25:46 2014] [error] return self.dispatch(request, *args, **kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/django/views/generic/base.py", line 69, in dispatch
[Tue Aug 05 07:25:46 2014] [error] return handler(request, *args, **kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/tables/views.py", line 154, in get
[Tue Aug 05 07:25:46 2014] [error] handled = self.construct_tables()
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/tables/views.py", line 145, in construct_tables
[Tue Aug 05 07:25:46 2014] [error] handled = self.handle_table(table)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/tables/views.py", line 118, in handle_table
[Tue Aug 05 07:25:46 2014] [error] data = self._get_data_dict()
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/tables/views.py", line 181, in _get_data_dict
[Tue Aug 05 07:25:46 2014] [error] self._data = {self.table_class._meta.name: self.get_data()}
[Tue Aug 05 07:25:46 2014] [error] File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/admin/overview/views.py", line 60, in get_data
[Tue Aug 05 07:25:46 2014] [error] data = super(GlobalOverview, self).get_data()
[Tue Aug 05 07:25:46 2014] [error] File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/views.py", line 43, in get_data
[Tue Aug 05 07:25:46 2014] [error] self.usage.summarize(*self.usage.get_date_range())
[Tue Aug 05 07:25:46 2014] [error] File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/base.py", line 200, in summarize
[Tue Aug 05 07:25:46 2014] [error] if not api.nova.extension_supported('SimpleTenantUsage', self.request):
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/utils/memoized.py", line 90, in wrapped
[Tue Aug 05 07:25:46 2014] [error] value = cache[key] = func(*args, **kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/nova.py", line 752, in extension_supported
[Tue Aug 05 07:25:46 2014] [error] extensions = list_extensions(request)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/horizon/utils/memoized.py", line 90, in wrapped
[Tue Aug 05 07:25:46 2014] [error] value = cache[key] = func(*args, **kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/nova.py", line 743, in list_extensions
[Tue Aug 05 07:25:46 2014] [error] return nova_list_extensions.ListExtManager(novaclient(request)).show_all()
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/novaclient/v1_1/contrib/list_extensions.py", line 37, in show_all
[Tue Aug 05 07:25:46 2014] [error] return self._list("/extensions", 'extensions')
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/novaclient/base.py", line 64, in _list
[Tue Aug 05 07:25:46 2014] [error] _resp, body = self.api.client.get(url)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 309, in get
[Tue Aug 05 07:25:46 2014] [error] return self._cs_request(url, 'GET', **kwargs)
[Tue Aug 05 07:25:46 2014] [error] File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 301, in _cs_request
[Tue Aug 05 07:25:46 2014] [error] raise e
[Tue Aug 05 07:25:46 2014] [error] Unauthorized: Unauthorized (HTTP 401)

On Tue, Aug 5, 2014 at 11:33 AM, ZHOU TAO A <tao.a.zhou@alcatel-lucent.com> wrote:

You have the wrong endpoints:

| 791fecb5eb414f2ebf4ebb0700e8f847 | regionOne |
http://192.168.137.102:8774/v2/%(tenant_id)s |
http://192.168.137.102:8774/v2/%(tenant_id)s |
http://192.168.137.102:8774/v2/%(tenant_id)s |

The IP address should be the controller's IP.

On 08/05/2014 11:28 AM, Sujeet Mulmi wrote:

Hi,

I am trying to bring the Dashboard up and ended up with the following error:

[Tue Aug 05 02:16:42 2014] [error] INFO:urllib3.connectionpool:Starting
new HTTP connection (1): 192.168.137.101
[Tue Aug 05 02:16:42 2014] [error] DEBUG:urllib3.connectionpool:"POST
/v2.0/tokens HTTP/1.1" 200 1349
[Tue Aug 05 02:16:42 2014] [error] INFO:urllib3.connectionpool:Starting
new HTTP connection (1): 192.168.137.101
[Tue Aug 05 02:16:42 2014] [error] DEBUG:urllib3.connectionpool:"GET
/v2.0/tenants HTTP/1.1" 200 143
[Tue Aug 05 02:16:42 2014] [error] INFO:urllib3.connectionpool:Starting
new HTTP connection (1): 192.168.137.101
[Tue Aug 05 02:16:42 2014] [error] DEBUG:urllib3.connectionpool:"POST
/v2.0/tokens HTTP/1.1" 200 3967
[Tue Aug 05 02:16:42 2014] [error] Login successful for user "admin".
[Tue Aug 05 02:16:42 2014] [error]
[Tue Aug 05 02:16:42 2014] [error] REQ: curl -i '
http://192.168.137.102:8774/v2/121eae2f2f664dc49265ad11229022c3/extensions'
-X GET -H "X-Auth-Project-Id: 121eae2f2f664dc49265ad11229022c3" -H
"User-Agent: python-novaclient" -H "Accept: application/json" -H
"X-Auth-Token: 352f4915d730d862e28c8ef02a85b029"
[Tue Aug 05 02:16:42 2014] [error]
[Tue Aug 05 02:16:42 2014] [error] INFO:urllib3.connectionpool:Starting
new HTTP connection (1): 192.168.137.102

The endpoint list is as follows:
[root@controller ~]# keystone endpoint-list
+----------------------------------+-----------+----------------------------------------------+----------------------------------------------+----------------------------------------------+----------------------------------+
|                id                |   region  |                   publicurl                  |                  internalurl                 |                   adminurl                   |            service_id            |
+----------------------------------+-----------+----------------------------------------------+----------------------------------------------+----------------------------------------------+----------------------------------+
| 791fecb5eb414f2ebf4ebb0700e8f847 | regionOne | http://192.168.137.102:8774/v2/%(tenant_id)s | http://192.168.137.102:8774/v2/%(tenant_id)s | http://192.168.137.102:8774/v2/%(tenant_id)s | 60a3a48fdd2d4cbdaedc2209579b5f7d |
| a12049abee334215bea3308a6e01a43d | regionOne | http://192.168.137.101:9292                  | http://192.168.137.101:9292                  | http://192.168.137.101:9292                  | 79f631a6530746458944d8606eeb9289 |
| f9a297aa9cd54f9d8442b703832a9734 | regionOne | http://192.168.137.101:5000/v2.0             | http://192.168.137.101:5000/v2.0             | http://192.168.137.101:35357/v2.0            | f124253db62f4c56ac324ec77552dbf1 |
+----------------------------------+-----------+----------------------------------------------+----------------------------------------------+----------------------------------------------+----------------------------------+

where 192.168.137.102 = compute node,
      192.168.137.101 = controller node.

Could anyone suggest how to resolve the issue?

Regards,
Sujeet


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] CentOS 6.5 cloud-init growpart/resizefs does not work on first boot.

Hi stackers,

I have come across a problem where growpart/resizefs does not work with the
CentOS 6.5 cloud image on first boot.

Here is the relevant config in cloud.cfg
==============================

growpart:
  mode: auto
  devices: ["/"]
resize_rootfs: True
resize_rootfs_tmp: /dev

cloud_init_modules:
 - bootcmd
 - write-files
 - growpart
 - resizefs

Here is the relevant log on first boot:
============================
[CLOUDINIT] helpers.py[DEBUG]: Running config-growpart using lock
(<cloudinit.helpers.DummyLock object at 0x1ed06d0>)
[CLOUDINIT] util.py[DEBUG]: Running command ['growpart', '--help'] with
allowed return codes [0] (shell=False, capture=True)
[CLOUDINIT] util.py[DEBUG]: Reading from /proc/1108/mountinfo (quiet=False)
[CLOUDINIT] util.py[DEBUG]: Read 521 bytes from /proc/1108/mountinfo
[CLOUDINIT] util.py[DEBUG]: Reading from /sys/class/block/vda1/partition
(quiet=False)
[CLOUDINIT] util.py[DEBUG]: Read 2 bytes from
/sys/class/block/vda1/partition
[CLOUDINIT] util.py[DEBUG]: Reading from
/sys/devices/pci0000:00/0000:00:05.0/virtio2/block/vda/dev (quiet=False)
[CLOUDINIT] util.py[DEBUG]: Read 6 bytes from
/sys/devices/pci0000:00/0000:00:05.0/virtio2/block/vda/dev
[CLOUDINIT] util.py[DEBUG]: Running command ['growpart', '--dry-run',
'/dev/vda', '1'] with allowed return codes [0] (shell=False, capture=True)
[CLOUDINIT] util.py[DEBUG]: Running command ['growpart', '/dev/vda', '1']
with allowed return codes [0] (shell=False, capture=True)
[CLOUDINIT] util.py[DEBUG]: resize_devices took 0.076 seconds
[CLOUDINIT] cc_growpart.py[DEBUG]: '/' NOCHANGE: no change necessary
(/dev/vda, 1)
[CLOUDINIT] helpers.py[DEBUG]: Running config-resizefs using lock
(<cloudinit.helpers.DummyLock object at 0x1ed08d0>)
[CLOUDINIT] util.py[DEBUG]: Reading from /proc/1108/mountinfo (quiet=False)
[CLOUDINIT] util.py[DEBUG]: Read 521 bytes from /proc/1108/mountinfo
[CLOUDINIT] cc_resizefs.py[DEBUG]: resize_info: dev=/dev/vda1 mnt_point=/
path=/
[CLOUDINIT] cc_resizefs.py[DEBUG]: Resizing / (ext4) using resize2fs
/dev/vda1
[CLOUDINIT] util.py[DEBUG]: Running command ('resize2fs', '/dev/vda1') with
allowed return codes [0] (shell=False, capture=True)
[CLOUDINIT] util.py[DEBUG]: Resizing took 0.004 seconds

In the base image, I have upgraded cloud-init to 0.7.4-1.el6 and installed
cloud-utils and cloud-initramfs-tools. After the first reboot,
growpart/resizefs do their job and the root file system is grown to the
disk size.

After a reboot, the relevant cloud-init logs:
===================================
cc_growpart.py[DEBUG]: '/' NOCHANGE: no change necessary (/dev/vda, 1)
util.py[DEBUG]: Resizing took 13.776 seconds
cc_resizefs.py[DEBUG]: Resized root filesystem (type=ext4, val=True)

I want growpart/resizefs to happen on the first boot. What can I do?

--
YY Inc. is hiring openstack and python developers. Interested? Check
http://soa.game.yy.com/jobs.html

--
Thanks,
Yuanle


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi,

On Wed, Aug 6, 2014 at 4:35 AM, sylecn sylecn@gmail.com wrote:

Hi stackers,

I have come across a problem where growpart/resizefs does not work with the
CentOS 6.5 cloud image on first boot.

Which kernel version are you running in the guest?

Here is the relevant config in cloud.cfg

growpart:
  mode: auto
  devices: ["/"]
resize_rootfs: True
resize_rootfs_tmp: /dev

cloud_init_modules:
 - bootcmd
 - write-files
 - growpart
 - resizefs

Growpart called by cloud-init only works for kernels >3.8; only newer
kernels support changing the partition size of a mounted partition. When
using an older kernel, the resizing of the root partition happens in the
initrd stage before the root partition is mounted, and the subsequent
cloud-init growpart run is a no-op.

Here is the relevant log on first boot:

[CLOUDINIT] helpers.py[DEBUG]: Running config-growpart using lock
(<cloudinit.helpers.DummyLock object at 0x1ed06d0>)
[CLOUDINIT] util.py[DEBUG]: Running command ['growpart', '--help'] with
allowed return codes [0] (shell=False, capture=True)
[CLOUDINIT] util.py[DEBUG]: Reading from /proc/1108/mountinfo (quiet=False)
[CLOUDINIT] util.py[DEBUG]: Read 521 bytes from /proc/1108/mountinfo
[CLOUDINIT] util.py[DEBUG]: Reading from /sys/class/block/vda1/partition
(quiet=False)
[CLOUDINIT] util.py[DEBUG]: Read 2 bytes from
/sys/class/block/vda1/partition
[CLOUDINIT] util.py[DEBUG]: Reading from
/sys/devices/pci0000:00/0000:00:05.0/virtio2/block/vda/dev (quiet=False)
[CLOUDINIT] util.py[DEBUG]: Read 6 bytes from
/sys/devices/pci0000:00/0000:00:05.0/virtio2/block/vda/dev
[CLOUDINIT] util.py[DEBUG]: Running command ['growpart', '--dry-run',
'/dev/vda', '1'] with allowed return codes [0] (shell=False, capture=True)
[CLOUDINIT] util.py[DEBUG]: Running command ['growpart', '/dev/vda', '1']
with allowed return codes [0] (shell=False, capture=True)
[CLOUDINIT] util.py[DEBUG]: resize_devices took 0.076 seconds
[CLOUDINIT] cc_growpart.py[DEBUG]: '/' NOCHANGE: no change necessary
(/dev/vda, 1)
[CLOUDINIT] helpers.py[DEBUG]: Running config-resizefs using lock
(<cloudinit.helpers.DummyLock object at 0x1ed08d0>)
[CLOUDINIT] util.py[DEBUG]: Reading from /proc/1108/mountinfo (quiet=False)
[CLOUDINIT] util.py[DEBUG]: Read 521 bytes from /proc/1108/mountinfo
[CLOUDINIT] cc_resizefs.py[DEBUG]: resize_info: dev=/dev/vda1 mnt_point=/
path=/
[CLOUDINIT] cc_resizefs.py[DEBUG]: Resizing / (ext4) using resize2fs
/dev/vda1
[CLOUDINIT] util.py[DEBUG]: Running command ('resize2fs', '/dev/vda1')
with allowed return codes [0] (shell=False, capture=True)
[CLOUDINIT] util.py[DEBUG]: Resizing took 0.004 seconds

In the base image, I have upgraded cloud-init to 0.7.4-1.el6 and installed
cloud-utils and cloud-initramfs-tools. After the first reboot,
growpart/resizefs do their job and the root file system is grown to the
disk size.

There is no cloud-initramfs-tools package for CentOS. You need
cloud-utils-growpart and dracut-modules-growroot from EPEL6 for the
initrd-based partition resizing.
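
A sketch of the package setup on the base image (this assumes the EPEL6
repository is already enabled; the dracut step rebuilds the initramfs so
the growroot module is actually included):

yum install -y cloud-utils-growpart dracut-modules-growroot
# Rebuild the initramfs for the running kernel:
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)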

After a reboot, the relevant cloud-init logs:

cc_growpart.py[DEBUG]: '/' NOCHANGE: no change necessary (/dev/vda, 1)
util.py[DEBUG]: Resizing took 13.776 seconds
cc_resizefs.py[DEBUG]: Resized root filesystem (type=ext4, val=True)

These are log messages from cloud-init's growpart run. Can you post the
boot messages from initrd growpart?

...Juerg

I wish the growpart/resizefs happen on first boot, what can I do?

--
YY Inc. is hiring openstack and python developers. Interested? Check
http://soa.game.yy.com/jobs.html

--
Thanks,
Yuanle


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Assign IP to Instances from External DHCP Server

Hi,

We have an OpenStack all-in-one setup, and this box has one Ethernet card
(eth0) which is connected to an external DHCP network (10.x.x.x). We
created an OVS bridge (br-ex) and added eth0 to it, and also another OVS
bridge (br-int) for internal instance communication. Currently, if we
create a public network and subnet (with DHCP disabled), floating IPs are
still not taken from the external DHCP server.

Are there any step-by-step documents we can use to get IPs for instances
directly from the external DHCP server?

--
Praveen Kumar


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Is there spice client for openstack like xvpvncclient?

Hi All,
I have installed SPICE on OpenStack successfully, but only the spice-html5 client is available. I want to know whether a standalone SPICE client like the xvpvnc client exists for OpenStack; otherwise I will have to write one myself. Thanks a lot.

zhchaobeyond@gmail.com
Charle Zhang


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] From single flat to multi flat provider network

Hello,

we currently have a single flat provider network neutron setup; we are
not using any L3 neutron service, just the DHCP server and the metadata
agent. There are currently 8 compute nodes in the initial network. Now
we want to expand and have compute nodes in different sub-nets, so this
will be a change from a single to a multi flat provider network.

The sub-nets are routed by a physical router which is also the gateway
for the compute nodes and instances. Each sub-net is VLAN-tagged (by the
provider) corresponding to the sub-net. For now we do not care about the
VLAN tags; this is just to give you all the information needed to find an
appropriate solution.

10.1.200.0/24 (VLAN 200) (GW: 10.1.200.254) -> 8 compute nodes +
Management Node (Glance, Keystone, Neutron etc.)
10.1.177.0/24 (VLAN 177) (GW: 10.1.177.254) -> new x compute nodes

Now the questions:

- What configuration changes need to be done to change from single to
  multi "mode"?
- What about the metadata and the DHCP server? These cannot be routed.
  How do we solve that: an IP helper address in the router, or one
  Neutron installation per sub-net?

Here is the ml2 config:

egrep "^[^#\s]" ml2_conf.ini

type_drivers = local,flat
mechanism_drivers = openvswitch,l2population
flat_networks = *
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_tunneling = False
local_ip = 10.1.200.8
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-bond1
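
For what it's worth, the ML2 flat type driver can serve several physical
networks at once, and flat_networks = * already permits any physnet name.
A second provider network would mainly need a second bridge mapping on the
agent side; a sketch, where physnet2 and br-bond2 are hypothetical names
for the new sub-net's network and bridge:

bridge_mappings = physnet1:br-bond1,physnet2:br-bond2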

Thank you!

Kind regards,
Christoph


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] [openstack][nova][vmware] Does the present vmware driver support resource pool?

Hi all:

I found that if I configure a resource pool the same way as a cluster:
/etc/nova/nova.conf
cluster_name=

it works and retrieves the resource pool as a cluster.

I want to know whether this is by design. Is there an official doc?

Thanks.

Thanks & Best Regards ,
Kai KT Tong


Cloud & Smarter Infrastructure
China Development
China Software Development Lab,
Notes ID: Kai KT Tong/China/IBM@IBMCN
E-Mail :  tongkai@cn.ibm.com
Tel:  86-10-82453574

Shangdi Software Park Ring Build 2RW337

Shangdi, Haidian District,

Beijing 100193, P.R.China


Hi,

There was talk at the last summit about supporting resource pools. My recollection is that too many edge cases were found. Please note that this is not something that is officially tested with Minesweeper.

Thanks

Gary

From: Kai KT Tong tongkai@cn.ibm.com
Date: Wednesday, August 6, 2014 at 1:38 PM
To: "openstack@lists.openstack.org" openstack@lists.openstack.org
Subject: [Openstack] [openstack][nova][vmware] Does the present vmware driver support resource pool?

Hi all:

I found that if I configure a resource pool the same way as a cluster:
/etc/nova/nova.conf
cluster_name=

it works and retrieves the resource pool as a cluster.

I want to know whether this is by design. Is there an official doc?

Thanks.

Thanks & Best Regards ,
Kai KT Tong


Cloud & Smarter Infrastructure
China Development
China Software Development Lab,
Notes ID: Kai KT Tong/China/IBM@IBMCN
E-Mail :  tongkai@cn.ibm.com
Tel:  86-10-82453574

Shangdi Software Park Ring Build 2RW337

Shangdi, Haidian District,

Beijing 100193, P.R.China


Re: [Openstack] Swift statistics discrepancy

I have been doing some further digging into this and have found
information which leads me to believe that replication is not working as
it should...

In this cluster, we have 1 account which holds the majority of data,
for the sake of this example, this account is 41677 - it holds 34TB of
data.

Looking at the account's sqlite DB for this account on all nodes, I
notice the incoming_sync and outgoing_sync tables have remote_id entries
which I cannot locate anywhere:

sqlite> select * from incoming_sync;
remote_id                             sync_point  updated_at

9332d177-1034-44e9-b77e-961a7ee7da6d  308256694   1406830765
d87e4dea-1c42-4f3f-8462-76227acc7c32  301384851   1406830765
0b84aac5-d16e-4d76-9903-eb9122c19119  310265599   1406836822

As you can see above, those are the nodes the "incoming" replication is
expected from - however those IDs are not present on any other node
with the same account. Hence the amount of data reported on some nodes
is less than 34TB.
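
One way to cross-check this (a sketch; the DB paths below are illustrative
and depend on your ring layout) is to compare each node's own broker id,
stored in account_stat, against the remote_id values its peers record in
incoming_sync:

# on each storage node holding a replica of the account DB:
sqlite3 /srv/node/<disk>/accounts/<part>/<suffix>/<hash>/<hash>.db \
    "SELECT id FROM account_stat;"

# each id printed above should appear as a remote_id row on the peers:
sqlite3 <peer copy of the same account DB> \
    "SELECT remote_id, sync_point FROM incoming_sync;"

If a node's account_stat id never shows up in any peer's incoming_sync,
that replica has not been syncing from it.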

Why would this be? What can I do to fix this to ensure replication
resumes correctly?

Thanks,

Pritpal

On 2014-08-05 13:06, pritpal@tech-guides.co.uk wrote:

Hi All,

We are running Swift 1.4.8 with 8 nodes and 4 zones.

We recently added 4 SSD drives, one each to 4 of our storage nodes.
The account and container rings were then rebalanced to ensure this
data doesn't sit on spinning disks. Since the rebalance was done, we
have noticed something unusual in the statistics returned from within
swift.

This is the command being run to grab the statistics:

swift -v -A https://127.0.0.1:8080/auth/v1.0 -U -K stat

Before the changes, the statistics looked like this:

===
Wed, 30 Jul 2014 10:51:26 +0100
Array
(
[X-Account-Object-Count] => 81473735
[X-Account-Bytes-Used] => 34156718530011
[X-Account-Container-Count] => 6510
)
Wed, 30 Jul 2014 10:51:36 +0100
Array
(
[X-Account-Object-Count] => 81473735
[X-Account-Bytes-Used] => 34156718530011
[X-Account-Container-Count] => 6510
)
Wed, 30 Jul 2014 10:51:46 +0100
Array
(
[X-Account-Object-Count] => 81698252
[X-Account-Bytes-Used] => 34213134745373
[X-Account-Container-Count] => 6510
)
Wed, 30 Jul 2014 10:51:56 +0100
Array
(
[X-Account-Object-Count] => 81687266
[X-Account-Bytes-Used] => 34209086906883
[X-Account-Container-Count] => 6510
)
Wed, 30 Jul 2014 10:52:06 +0100
Array
(
[X-Account-Object-Count] => 81687418
[X-Account-Bytes-Used] => 34209165517185
[X-Account-Container-Count] => 6510
)
Wed, 30 Jul 2014 10:52:16 +0100
Array
(
[X-Account-Object-Count] => 81405109
[X-Account-Bytes-Used] => 34105818678331
[X-Account-Container-Count] => 6510
)
Wed, 30 Jul 2014 10:52:26 +0100
Array
(
[X-Account-Object-Count] => 81460103
[X-Account-Bytes-Used] => 34127360552723
[X-Account-Container-Count] => 6510
)
===

Since the rebalancing, the statistics seem to show that
X-Account-Bytes-Used has dropped by around 7TB and
X-Account-Object-Count seems to have dropped to somewhere between 60M
- 70M objects. The statistics now seem to jump around wildly, as can
be seen below.

===
Tue, 05 Aug 2014 12:32:49 +0100
Array
(
[X-Account-Object-Count] => 59242579
[X-Account-Bytes-Used] => 24304403925249
[X-Account-Container-Count] => 6603
)
Tue, 05 Aug 2014 12:32:59 +0100
Array
(
[X-Account-Object-Count] => 58817476
[X-Account-Bytes-Used] => 24167437130211
[X-Account-Container-Count] => 6603
)
Tue, 05 Aug 2014 12:33:09 +0100
Array
(
[X-Account-Object-Count] => 63760679
[X-Account-Bytes-Used] => 25828018327577
[X-Account-Container-Count] => 6603
)
Tue, 05 Aug 2014 12:33:19 +0100
Array
(
[X-Account-Object-Count] => 66724351
[X-Account-Bytes-Used] => 27197208718607
[X-Account-Container-Count] => 6603
)
Tue, 05 Aug 2014 12:33:29 +0100
Array
(
[X-Account-Object-Count] => 67222017
[X-Account-Bytes-Used] => 27465314723569
[X-Account-Container-Count] => 6603
)
Tue, 05 Aug 2014 12:33:39 +0100
Array
(
[X-Account-Object-Count] => 67214198
[X-Account-Bytes-Used] => 27536268561101
[X-Account-Container-Count] => 6603
)
Tue, 05 Aug 2014 12:33:49 +0100
Array
(
[X-Account-Object-Count] => 68353884
[X-Account-Bytes-Used] => 28017869874871
[X-Account-Container-Count] => 6603
)
===

The above repeats: the count increases, then drops back down.
The question I have is, why would this happen? We definitely did not
delete anything, so as far as I am concerned the data was just moved
around.

You can see the behaviour on these graphs -
http://www.preeto.co.uk/SwiftStats.PNG - Note how prior to the change
(2014-07-31), the total_bytes and total_objects graphs are fairly
static.

Regards,

Pritpal


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Neutron issue "ERROR: Multiple possible networks found, use a Network ID to be more specific"

Dear All...

1) I'm running Openstack Havana with Neutron.

---*---

2) In my setup, I have two networks configured, a private and a public one.

nova net-list

+--------------------------------------+---------+------+
| ID | Label | CIDR |
+--------------------------------------+---------+------+
| 44a731ad-72c9-4915-9bb6-a6c86e37449c | private | - |
| 82c1ff14-8922-4985-9153-5b9e18d81803 | public | - |
+--------------------------------------+---------+------+

---*---

3) I can only start a machine using the nova client if I specify the
network; otherwise, it returns "ERROR: Multiple possible networks found,
use a Network ID to be more specific"

a) # nova boot --flavor 2 --image fe16c307-86d3-4cdc-b2e9-09bc0e9603e0
Test2
ERROR: Multiple possible networks found, use a Network ID to be more
specific. (HTTP 400) (Request-ID: req-15da2328-29a5-4290-aec7-1f4a68aaa793)

b) # nova boot --flavor 2 --image fe16c307-86d3-4cdc-b2e9-09bc0e9603e0
--nic net-id=44a731ad-72c9-4915-9bb6-a6c86e37449c Test2
+--------------------------------------+--------------------------------------------------+
| Property                             | Value                                            |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                           |
| OS-EXT-AZ:availability_zone          | nova                                             |
| OS-EXT-SRV-ATTR:host                 | -                                                |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000002b                                |
| OS-EXT-STS:power_state               | 0                                                |
| OS-EXT-STS:task_state                | scheduling                                       |
| OS-EXT-STS:vm_state                  | building                                         |
| OS-SRV-USG:launched_at               | -                                                |
| OS-SRV-USG:terminated_at             | -                                                |
| accessIPv4                           |                                                  |
| accessIPv6                           |                                                  |
| adminPass                            | W2g3EbpXTgDG                                     |
| config_drive                         |                                                  |
| created                              | 2014-08-06T16:02:55Z                             |
| flavor                               | m1.small (2)                                     |
| hostId                               |                                                  |
| id                                   | 6a9d2e80-f883-4d9b-addb-e974ec18c138             |
| image                                | Fedora 19 (fe16c307-86d3-4cdc-b2e9-09bc0e9603e0) |
| key_name                             | -                                                |
| metadata                             | {}                                               |
| name                                 | Test2                                            |
| os-extended-volumes:volumes_attached | []                                               |
| progress                             | 0                                                |
| security_groups                      | default                                          |
| status                               | BUILD                                            |
| tenant_id                            | 28d05f91417d4cef9859bd68b0b45830                 |
| updated                              | 2014-08-06T16:02:56Z                             |
| user_id                              | 31f7fd90395d4de1beda8bc44928503e                 |
+--------------------------------------+--------------------------------------------------+

---*---

4) I do not understand why it is mandatory to start a machine with a
network interface already attached. As far as I know, this was not the
case when nova-network was used.

My underlying problem with the way this works now is that I'm part
of a cloud federation which provides rOCCI clients to the users.
Currently, using rOCCI, users start a VM and then attach a network to it
in two separate steps. There is no way to do these two operations in
rOCCI in a single step. As a consequence, they cannot start machines
when we have an OpenStack with Neutron.

---*---

5) Finally, I wonder if there is a way to actually establish a default
network. The desired behaviour would be to associate a VM with a default
network if nothing is stated otherwise.

Thank you for any help
Cheers
Goncalo


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Neutron VLANs issue

Dear All,

I have 2 node Openstack setup on Ubuntu 14.04

Controller+Network and Compute Node, I am using Neutron with ml2 (vlan).

Both nodes have 2 NICs

eth0 - Public with br-ex
eth1 - Local with br-inet

Both nodes' local Ethernet interfaces (eth1) were connected directly to
each other without any switch, and both nodes were working perfectly. A
few days ago I bought a Cisco 2960 and connected each node's eth1 to the
switch.

Now when I reboot or create any new VMs, they are not getting an IP from
the controller node. Here is the log of the VM:

cloud-init start-local running: Wed, 06 Aug 2014 16:28:25 +0000. up 3.02 seconds
no instance data found in start-local
cloud-init-nonet waiting 120 seconds for a network device

Would you please help me how I can solve this issue?

Br.

Umar


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

You most likely need to configure the Cisco switch so that both switch
ports connected to eth1 on each system are trunk ports and accept the full
range of tenant VLANs.

Something like:
conf t
int eth1/0
switchport mode trunk
switchport trunk allowed vlan 100-199

(repeat for the second port)

On Wed, Aug 6, 2014 at 11:41 AM, Umar Draz unix.co@gmail.com wrote:


--
Andrew Mann
DivvyCloud Inc.
www.divvycloud.com


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Compute V2 API Extensions: No HTTP Status error codes ?

In the Compute V2 API extensions (
http://developer.openstack.org/api-ref-compute-v2-ext.html ), the expanded
documentation for an API, obtained by clicking the detail button next to a
method, shows only the normal successful HTTP status codes. For e.g.

Normal response codes
200

Why aren't the HTTP status error codes specified like in other APIs ?
( for e.g. Compute V2 API -
http://developer.openstack.org/api-ref-compute-v2.html )

Thanks,
--
Sekhar Vajjhala


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi Sekhar,
Each extension is documented in a separate WADL file, and the WADL can be
revised to display the more descriptive status error codes.

Log a doc bug at http://bugs.launchpad.net/openstack-api-site for any
extensions missing the extra information you seek.

Thanks,
Anne

On Wed, Aug 6, 2014 at 2:47 PM, Sekhar Vajjhala sekharv01@gmail.com wrote:



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [Docker] nova-compute failed to start with docker hypervisor.

Hi everyone~!

I've been trying to deploy docker hypervisor environments.

However, nova-compute didn't start, for the reason below.

[nova-compute LOG]
2014-08-07 00:27:38.946 11777 INFO nova.openstack.common.periodic_task [-]
Skipping periodic task _periodic_update_dns because its interval is negative
2014-08-07 00:27:38.995 11777 INFO nova.virt.driver [-] Loading compute
driver 'novadocker.virt.docker.driver.DockerDriver'
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
Traceback (most recent call last):
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
File
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py",
line 117, in wait
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
x.wait()
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
File
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py",
line 49, in wait
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
return self.thread.wait()
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 168,
in wait
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
return self._exit_event.wait()
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
return hubs.get_hub().switch()
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in
switch
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
return self.greenlet.switch()
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 194,
in main
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
result = function(*args, **kwargs)
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
File "/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py",
line 480, in run_service
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
service.start()
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
File "/usr/lib/python2.7/dist-packages/nova/service.py", line 163, in start
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
self.manager.init_host()
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1012,
in init_host
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
self.driver.init_host(host=self.host)
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
File
"/usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py",
line 82, in init_host
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
_('Docker daemon is not running or is not reachable'
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup
NovaException: Docker daemon is not running or is not reachable (check the
rights on /var/run/docker.sock)
2014-08-07 00:27:39.081 11777 TRACE nova.openstack.common.threadgroup

I also added the nova user to the docker group and granted access to
docker.sock.

in /etc/group file

nova:x:115:
cinder:x:116:
docker:x:117:nova

and..

root@compute1:/etc/nova/rootwrap.d# ll /var/run/docker*
-rwxrwxrwx 1 nova root 5 Aug 7 00:11 /var/run/docker.pid*
srwxrwxrwx 1 nova docker 0 Aug 7 00:11 /var/run/docker.sock=

But the same message occurs..

Could you anybody tell me how to solve the issue?


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi yongiman,

I faced the same problem in my environment.

  • OS: Ubuntu 14.04 64bit
  • OpenStack: recent master (constructed by devstack at 8/11)
  • Docker: 0.9.1 (default for Ubuntu 14.04)

But I was able to solve it.

nova-docker calls the docker ping API with a "v1.13" prefix, so you need
to upgrade docker to a version that supports the v1.13 API.

(For now, the version prefix is hard-coded in nova-docker; you cannot
change it.)

http://docs.docker.io.s3-website-us-west-2.amazonaws.com/installation/ubuntulinux/#ubuntu-trusty-1404-lts-64-bit

Please check your docker version.
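
A quick way to check (a sketch; the exact output format varies by docker
release):

docker version
# the "Server API version:" line should report 1.13 or newer;
# docker 0.9.1 ships an older API, which is why the ping fails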

Regards,

--
Takahiro Shida.

(2014/08/07 13:20), 한승진 wrote:


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] nova-network issues

Hi All,

I set up an OpenStack all-in-one environment with nova-network.

I created a network, 172.16.10.0/24, which is the same as the real local
area network. When I create an instance, the local area network sometimes
does not work correctly.

I used the arping command and found there were two gateways: one produced
by dnsmasq, and the other the physical gateway. I added
"dhcp-option:3,172.16.10.1" to dnsmasq.conf, but it seems that it doesn't
work.

Is there any way to make dnsmasq advertise the physical gateway?

Any advice would really be appreciated.

Best regards
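
A sketch of one approach (untested here; the file path is illustrative,
and note that dnsmasq options use '=' rather than ':'): point nova-network
at a custom dnsmasq config and advertise the physical router from there.

# /etc/nova/nova.conf
dnsmasq_config_file=/etc/nova/dnsmasq-nova.conf

# /etc/nova/dnsmasq-nova.conf
dhcp-option=option:router,172.16.10.1

After restarting nova-network, instances should then receive 172.16.10.1
as their default gateway via DHCP.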


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] How to see the storage information from openstack dashboard ?

Hi,
How can I see the available storage space from the OS dashboard ?
Admin >> Project >> Overview is just displaying the quota ..
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

You cannot.
On 08/07/2014 03:13 PM, Mridhul Pax wrote:



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Openstack networking usecase

Hello All,

I need your help in following networking scenario:

  • I bring up 2 VMs on a compute node via the OpenStack Controller Dashboard
  • these two VMs belong to the same virtual network
  • each VM gets a virtual IP address
  • SCENARIO: Is it possible that each virtual IP is assigned an
    access VLAN, and that the VLAN traffic is passed out a physical port that is
    connected to a trunk port of an external switch?
  • VM1 and VM2 shouldn't talk to each other without passing through the
    trunk port

Example:
VM1 gets a virtual IP address, let's say 10.0.0.20/24. How do I assign an
access VLAN, let's say VLAN 10?

VM2 gets a virtual IP address, let's say 10.10.0.20/24. How do I assign an
access VLAN, let's say VLAN 20 (access port)?

My physical interface on the compute node is eth0, connected to a trunk
port of a switch.

VM1 and VM2 shouldn't talk to each other unless they pass through
the trunk and get their packets processed further by an external L3 switch.

Does this require explicit configuration of Open vSwitch?

--
Thanks,
-Ram


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi,

Please go through this link, and revert if you have any issues.

https://developer.rackspace.com/blog/neutron-networking-vlan-provider-networks/
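
A minimal sketch of the provider-network approach that link describes
(assuming the ML2/OVS VLAN setup from the original post, with an
illustrative physical network label of physnet1):

neutron net-create vlan10 --provider:network_type vlan \
    --provider:physical_network physnet1 --provider:segmentation_id 10
neutron subnet-create vlan10 10.0.0.0/24 --name vlan10-subnet

neutron net-create vlan20 --provider:network_type vlan \
    --provider:physical_network physnet1 --provider:segmentation_id 20
neutron subnet-create vlan20 10.10.0.0/24 --name vlan20-subnet

VMs booted on vlan10 and vlan20 then reach each other only via the external
L3 switch on the trunk, which matches the isolation requirement above.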

From: Ramprasad Velavarthipati [mailto:rampras.vm@gmail.com]
Sent: Thursday, August 07, 2014 2:45 PM
To: openstack@lists.openstack.org
Subject: [Openstack] Openstack networking usecase


Re: [Openstack] [neutron] how to limit bandwidth on floating ip?

On 05.08.2014 10:34, xiaoyang.yu wrote:

For example, can I limit the bandwidth on a specific floating IP to less
than 5 Mbit/s? Does OpenStack support this function, or must I do it
myself using some other technology?

thanks


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi,

hmmm i've no idea if you can change the port settings directly. A
possibility is to make a flavor for special VMs with
rxtx_factor = 0.05 (== 5 Mbit/s if you have a 100 Mbit/s connection)

Cheers
heiko

--
anynines.com


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Neutron doesn't support rate limiting right now. You can apply tc to the router gateway interface to achieve it manually.

cheers,
Li Ma
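
A sketch of the tc approach (illustrative names; run on the network node
inside the router's namespace, where qg-XXXX is the router's gateway port):

# find the router namespace and its qg- interface
ip netns list | grep qrouter
ip netns exec qrouter-<router-id> ip link show | grep qg-

# shape egress on the gateway interface to roughly 5 Mbit/s
ip netns exec qrouter-<router-id> tc qdisc add dev qg-XXXX root tbf \
    rate 5mbit burst 32kbit latency 400ms

Note this limits all traffic leaving that router; restricting a single
floating IP would additionally need a classful qdisc plus a filter matching
that address.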



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] nova-network, VRRP and NAT

Hi,
I am using nova-network on Havana in a multi-node setup with almost 20
instances that are web servers.
I am planning to use an HAProxy cluster with keepalived for failover,

but I am concerned whether nova security policies allow VRRP to work, as
it requires multiple IPs on the same MAC.

Is clearing the rule the only way to make it work, or is there a
nova-network way to make it work?

I am also worried about the NAT rule when IP failover happens.

Is there any way to make this work using nova-network?

Thanks,


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On 07/08/14 21:42, mad Engineer wrote:
but concerned whether nova security policies allow VRRP to work as it
requires multiple IP on same MAC?

Is clearing the rule only way to make it work,or is there nova-network
way to make it work.

also i am worried about NAT rule when IP fail over happens

This might help - copied from a note I put on our ops wiki:

OpenStack has anti-spoofing iptables rules that sit very close to your
instance on the hypervisor. This means you can't just add a new address
without telling OpenStack. To tell OpenStack, you need to add an
allowed-address-pair to the port which your instance will use with the
new IP.

For example: I have a VM with a fixed IP of 10.1.1.13. I want to add
the alias IP 10.1.1.14 to that and one other VM, for load balancing.

First, make sure you aren't using an IP in the DHCP range for this
subnet. Then update the Ports for each instance participating in VRRP.

nova interface-list
+------------+--------------------------------------+--------------------------------------+--------------+-------------------+
| Port State | Port ID                              | Net ID                               | IP addresses | MAC Addr          |
+------------+--------------------------------------+--------------------------------------+--------------+-------------------+
| ACTIVE     | 50eb611d-5e71-43cf-ba4d-1017bc6e488c | 623417c3-dffc-4b6d-96fa-a4ae0ec1df52 | 10.1.1.13    | fa:16:3e:5b:64:38 |

neutron port-update 50eb611d-5e71-43cf-ba4d-1017bc6e488c \
    --allowed-address-pairs type=dict list=true \
    mac_address=fa:16:3e:5b:64:38,ip_address=10.1.1.14

Once you have updated the ports attached to each VM, you will need some
security group rules.

neutron security-group-create vrrp_members
neutron security-group-rule-create --ethertype IPv4 \
    --direction egress --protocol 51 \
    --remote-ip-prefix 224.0.0.18/32 vrrp_members
neutron security-group-rule-create --ethertype IPv4 \
    --direction ingress --protocol 51 \
    --remote-group-id vrrp_members vrrp_members

Then apply this security group to your VRRP instances.
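
For completeness, a sketch of the guest-side keepalived configuration that
would claim the 10.1.1.14 alias (values are illustrative; the peer instance
would use state BACKUP and a lower priority):

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.1.1.14
    }
}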

[Openstack] [Murano] Screencasts series: Integration with Heat Orchestration Templates

Hi folks,

I'm glad to announce a new screencast about Murano's integration with Heat
Orchestration Templates.

Besides the usual package format, which will be discussed in further
screencasts, Murano applications can be deployed from HOT templates.

The following screencast http://youtu.be/oRD3ihwa9u4[1] will tell you how to
do that.

Enjoy watching.

Waiting for your questions at #murano channel.

[1] - http://youtu.be/oRD3ihwa9u4


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Fuel next release

Hi
When is the next release of Mirantis Fuel with Ubuntu 14.04/Icehouse support expected?
Ajay


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi Ajay,

  Icehouse support is already included in the 5.0 version of Fuel. Adding support for Ubuntu 14.04.x as a host OS is under evaluation / development and is currently expected to be folded into the Fuel project sometime near the Juno summit. Even with the current 5.x version of Fuel, one can run Ubuntu 14.04 as a guest OS – it's just the host OS support that is under evaluation / development for a future release.

  Is there a particular capability / feature of Ubuntu 14.04 that you're finding critical when acting as the host OS that Ubuntu 12.04.4 is not providing?

Thanks,

  David J. Easter

  Director of Product Management, Mirantis, Inc.

http://openstacksv.com/

From: "Ajay Kalambur (akalambu)" akalambu@cisco.com
Date: Thursday, August 7, 2014 at 9:12 AM
To: "openstack@lists.openstack.org" openstack@lists.openstack.org
Subject: [Openstack] Fuel next release


[Openstack] Swift And Custom HTTP Response Headers

Hello.

I am considering building out a Swift cluster to act as an origin for some
edge cache servers. Are there any readily available API
extensions (preferably OSS) which allow for setting max-age and expires
headers?
headers?

Thanks!


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

I thought there was a config option for setting which arbitrary HTTP
headers (non-meta) you allow to be stored with objects, but I can't find
it on:

http://docs.openstack.org/developer/swift/deployment_guide.html
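
For reference, the option this may refer to is allowed_headers in the
object-server config (a sketch; verify the option name and defaults against
your Swift version before relying on it):

# /etc/swift/object-server.conf
[app:object-server]
use = egg:swift#object
# extra headers that may be stored with an object (besides X-Object-Meta-*);
# adding Cache-Control and Expires lets clients set them per object
allowed_headers = Content-Disposition, Content-Encoding, X-Delete-At,
    X-Object-Manifest, Cache-Control, Expires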

On Thu, Aug 7, 2014 at 9:25 AM, Brent Troge brenttroge2016@gmail.com
wrote:



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Swift And IIS

I am looking to store smooth streaming assets in my Swift cluster. I can
instruct IIS 7 to reverse proxy into Swift; however, that does not seem to
be a supported configuration for IIS Media Services to read the smooth
streaming assets. It appears IIS Media Services expects the VOD assets to
be local or accessible via CIFS. Does anyone have experience serving smooth
streaming VOD assets stored in a Swift cluster? I have been able to serve
the smooth streaming assets from Swift using Apache and mod_smoothstreaming.
However, not all IIS media functionality is found in the Apache module, so
I am stuck with IIS Media Services.

Thanks!


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

crunchyroll open-sourced this:

https://github.com/crunchyroll/swiftmp4

.. which, last I checked, supported time-based offset streaming directly
through the Swift proxies using a cool buffering hack. But IIRC, it's
pretty tied to the mp4 format. I think it was based on.... maybe...

http://h264.code-shop.com/trac/wiki/Mod-H264-Streaming-Apache-Version2

Dunno if the email I have for Young Kim is current. I'm pretty sure he
wasn't using IIS or even Apache as a front end but instead just served from
the Swift proxies directly.

Do you by chance have a simple explanation of the disadvantage of
straight-up HTTP pseudo-streaming using Swift's native range support to
serve video directly from Swift? Cause I always just upload mp4's, set up
some container ACLs and point Chrome at the URL and it "just works"...

/me continues his quest for even a basic understanding of the state of the
art for video streaming on the web

On Thu, Aug 7, 2014 at 1:54 PM, Brent Troge brenttroge2016@gmail.com
wrote:



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] OpenStack 2014.1.2 released

Hello everyone,

The OpenStack Stable Maintenance team is happy to announce the release
of the 2014.1.2 stable Icehouse release. We have been busy reviewing and
accepting backported bugfixes to the stable/icehouse branches according
to the criteria set at:
https://wiki.openstack.org/wiki/StableBranch

A total of 157 bugs have been fixed across all projects. These
updates to Icehouse are intended to be low risk with no
intentional regressions or API changes. The list of bugs, tarballs and
other milestone information for each project may be found on Launchpad:
https://launchpad.net/ceilometer/icehouse/2014.1.2
https://launchpad.net/cinder/icehouse/2014.1.2
https://launchpad.net/glance/icehouse/2014.1.2
https://launchpad.net/heat/icehouse/2014.1.2
https://launchpad.net/horizon/icehouse/2014.1.2
https://launchpad.net/keystone/icehouse/2014.1.2
https://launchpad.net/neutron/icehouse/2014.1.2
https://launchpad.net/nova/icehouse/2014.1.2
https://launchpad.net/trove/icehouse/2014.1.2

Release notes may be found on the wiki:
https://wiki.openstack.org/wiki/ReleaseNotes/2014.1.2

The freeze on the stable/icehouse branches will be lifted today as we
begin working toward the 2014.1.3 release, planned for Oct 2 release
and managed by Adam Gandelman.

Thanks,
chuck


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Impressive job! Thanks!! :-D
Icehouse is rock solid!

On 7 August 2014 20:19, Chuck Short chuck.short@canonical.com wrote:



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Wordpress template

Hi All,

I am trying to launch an instance with wordpress application from the heat template downloaded from
https://raw.github.com/openstack/heat-templates/master/cfn/F17/WordPress_Composed_Instances.template

But the following error is being shown:

Error: Resource CREATE failed: BadRequest: Multiple possible networks found, use a Network ID to be more specific.

I have created two networks: one is public, for floating IPs, and the other
is private. How do I specify the network ID in the above template?
Thanks
Kumar
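
One approach (a sketch, not verified against that exact template: it
assumes the instances are AWS::EC2::Instance resources, which in Heat
accept a SubnetId property, and the parameter name is illustrative) is to
pass the private subnet in and pin each instance to it:

"Parameters" : {
  "SubnetId" : { "Type" : "String", "Description" : "ID of the private subnet" }
},
...
"WebServer" : {
  "Type" : "AWS::EC2::Instance",
  "Properties" : {
    "SubnetId" : { "Ref" : "SubnetId" }
  }
}

The subnet UUID can then be supplied at stack-create time along with the
template's other parameters, e.g. -P "SubnetId=<uuid>".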




Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi All,

Any suggestions ?

Thanks
Kumar

From: Gnan Kumar, Yalla
Sent: Friday, August 08, 2014 10:46 AM
To: openstack@lists.openstack.org
Subject: Wordpress template



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [energy] How to enable kwapi plugin in Ceilometer ?

Hi all,

I am running devstack with Ceilometer enabled. I am looking to gather
energy and power stats. I have installed the kwapi plugin and am able to
retrieve power numbers via the kwapi driver.

I need some help with how to enable the gathering of these power stats in
Ceilometer, and with what config changes are needed on the Ceilometer side.

Regards,
Deepthi


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On 08/08/2014 11:14, Deepthi Dharwar wrote:


Redirecting to Francois Rossigneux who is the main contributor...

-Sylvain

Re: [Openstack] [energy] How to enable kwapi plugin in Ceilometer ?

Hi Deepthi,

I solved this problem by setting up a keystone endpoint for kwapi; you can
find more information here:

http://docs.openstack.org/grizzly/openstack-compute/install/apt/content/keystone-service-endpoint-create.html

After that, change your pipeline.yaml file in /etc/ceilometer/ to recognize
the energy meters; that would be something like:

- name: meter_energy
  interval: 300
  meters:
      - "power*"
      - "energy*"
  sinks:
      - meter_sink

Restart the ceilometer service and you should be able to collect the power
and energy metering from kwapi.

BR,
Bruno.
Intern at ICCLab, ZHAW.




Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Thanks a lot Bruno!

Unfortunately I am still facing some minor hiccups.

The machine is an all-in-one devstack system; it hosts both my controller
and my compute node.

Outlining the process:

I indeed created a keystone service and endpoint called 'kwapi':

> keystone service-create --name=kwapi --type=metering --description="Energy"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Energy |
| enabled | True |
| id | b952438819dc4481903455ed9a564c01 |
| name | kwapi |
| type | metering |
+-------------+----------------------------------+

I have my kwapi auth_port set to 5002.

stack@mc3:~$ keystone endpoint-create --region RegionOne \
  --service-id=b952438819dc4481903455ed9a564c01 \
  --publicurl=http://10.0.0.1:5002/v1/probes \
  --internalurl=http://10.0.0.1:5002/v1/probes \
  --adminurl=http://10.0.0.1:5002/v1/probes
+-------------+-------------------------------------+
| Property | Value |
+-------------+-------------------------------------+
| adminurl | http://10.0.0.1:5002/v1/probes |
| id | 81557eac1b4348a882f2391796ee233f |
| internalurl | http://10.0.0.1:5002/v1/probes |
| publicurl | http://10.0.0.1:5002/v1/probes |
| region | RegionOne |
| service_id | b952438819dc4481903455ed9a564c01 |
+-------------+-------------------------------------+

I am able to fetch the energy and power numbers through the REST API and
the python rrd tool, but ceilometer is still unable to read them.
I have appended the power and energy meters to
/etc/ceilometer/pipeline.yaml.

ceilometer-acompute is erroring out with the following errors:

DEBUG urllib3.connectionpool [-] Setting read timeout to None
_make_request /usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:375
2014-08-11 16:13:25.102 10413 DEBUG urllib3.connectionpool [-] "POST
/v2/tokens HTTP/1.1" 404 93 _make_request
/usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:415
2014-08-11 16:13:25.103 10413 ERROR ceilometer.central.manager [-] Skip
interval_task because Keystone error: Authorization Failed: The resource
could not be found. (HTTP 404)

Are there any tweaks needed in kwapi's api.conf and driver.conf, as far as
ACLs and signing are concerned, for ceilometer to talk to kwapi?

Please do let me know.
Regards,
Deepthi
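
One thing worth checking, given the 404 from POST /v2/tokens (an
assumption, not confirmed in this thread): the auth URL ceilometer uses may
be missing its version suffix. In ceilometer.conf, something like (values
illustrative):

[service_credentials]
os_auth_url = http://10.0.0.1:5000/v2.0
os_username = ceilometer
os_password = <password>
os_tenant_name = service

Without the /v2.0 suffix, the token request hits the Keystone root and
returns 404.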

On 08/08/2014 09:01 PM, Bruno Grazioli wrote:


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Keystone 2014.1.2

An issue was discovered with Keystone 2014.1.2 stable release tarball
shortly after it was uploaded on the release page at:
https://launchpad.net/keystone/icehouse/2014.1.2

A new tarball corresponding to 2014.1.2 code was generated and uploaded:
md5sum: c8a85fc2ac76679eb1b674e0c2c65e36 keystone-2014.1.2.tar.gz
sha1sum: c1d481d8b330e50da5ace19c23e5909c04fa1cba keystone-2014.1.2.tar.gz

Please double-check that you got the right one.

Regards
chuck


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Please double-check that you got the right one.

And if you fetched the keystone.git repository earlier than 15:25 UTC
today, you will need to remove the old tag and fetch the new one with:
git tag -d 2014.1.2; git fetch
Correct tag is:
object 6cbf835542d62e6e5db4b4aef7141b1731cad9dc
type commit
tag 2014.1.2
tagger Chuck Short chuck.short@canonical.com 1407511150 -0400

2014.1.2
gpg: Signature made Fri Aug 8 15:19:11 2014 UTC using DSA key ID FA14013B
gpg: Good signature from "Chuck Short chuck.short@canonical.com"

[Openstack] [Swift] Object invalidation after overwrite

Hi,

I have a question regarding the way object overwrites work in the
absence of versioning. I couldn't find this info in the documentation.

Consider the case when I have an object O already present in the Swift
cluster. There are N replicas of this object. When a new PUT request
that overwrites object O is returned, Swift guarantees that the new
object, say O' got written successfully to N/2 + 1 object servers. But
there could be some replicas that still have the older object O.

Does Swift have a way of invalidating the older object O on all object servers?

If there is a GET request for object O, immediately after the
overwrite, does Swift guarantee that the older object O cannot be
returned?

Thanks in advance.
-Shri


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

You are describing one of the ways that Swift does eventual consistency. In the scenario you describe, it is indeed possible to get the older version of the object on a read. There is no whole-cluster invalidation of an object. Swift's behavior here gives you high availability of your data even when you have failures in your cluster. Your proposed scenario can happen if a server fails after the first write and is restored after the second write, but before the read.

However, there are a few things you can do on the client side. One option is to keep track of the etag of the data you've sent. That way you can verify that you're getting back what you expected to get back. Another option would be to use a read request with the If-[Un]Modified-Since header (see http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html).

--John
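
A sketch of the etag-tracking option (illustrative URL, container and
token; Swift honors If-Match on GET and returns 412 Precondition Failed on
a mismatch):

# store the etag Swift returns for the PUT
ETAG=$(curl -si -X PUT -H "X-Auth-Token: $TOKEN" \
    --data-binary @myfile http://proxy:8080/v1/AUTH_test/c/o \
    | grep -i '^Etag:' | awk '{print $2}' | tr -d '\r')

# later, read back only if the object still matches what was written
curl -i -H "X-Auth-Token: $TOKEN" -H "If-Match: $ETAG" \
    http://proxy:8080/v1/AUTH_test/c/o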

On Aug 8, 2014, at 1:18 PM, Shrinand Javadekar shrinand@maginatics.com wrote:



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Nova is choosing the wrong availability zone for the Cinder volume

Hello,
I have 2 cinder-volume servers, each one with 2 storage backends.
Each of those cinder-volume servers is in a different availability zone
(tdc-a and tdc-b), and there are 8 nova-compute servers (4 in tdc-a and
4 in tdc-b).
When I launch a new instance in zone tdc-a, Nova chooses the right
zone to place the instance, but the volume is sometimes placed in a
different zone.
I tried many nova/cinder configurations, changing scheduler_driver and
scheduler_default_filters, but always with the same result.
I'm not sure how to debug it. I added one line to
/usr/lib/python2.7/dist-packages/cinder/openstack/common/scheduler/filters/availability_zone_filter.py
to print to the log file which AZ it receives, and it looks like
Nova is sending a random zone no matter what zone I have chosen.
I appreciate any help!
Thanks!!

Fernando Cortijo


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] OpenStack sponsorship for OPW - Dec-Mar 2015 internship

Hi,

I'm interested in participating in OPW (Outreach Program for Women) 2014
for Dec 2014 -Mar 2015 internship. Would OpenStack be a participating
organization this time around too? (I see that it was for May-Aug:
https://wiki.gnome.org/OutreachProgramForWomen/2014/MayAugust#Participating_Organizations
).

I'm currently trying to understand the Swift program of OpenStack and am
interested in exploring it. I wanted to know how relevant Swift is for
this round of OPW. Is Swift looking to be a part of it?

How relevant would it be for OPW if I continue to explore and learn Swift?

Thank you in advance.

-Mahati


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

I honestly have no idea how this works (the actual logistics), but speaking from the Swift perspective within OpenStack, I'd love to help out in any way I can.

--John

On Aug 8, 2014, at 7:41 PM, Mahati C mahati.chamarthy@gmail.com wrote:

Hi,

I'm interested in participating in OPW (Outreach Program for Women) 2014 for Dec 2014 -Mar 2015 internship. Would OpenStack be a participating organization this time around too? (I see that it was for May-Aug: https://wiki.gnome.org/OutreachProgramForWomen/2014/MayAugust#Participating_Organizations).

I'm currently trying to understand the Swift program of OpenStack and am interested in exploring it. I wanted to know how relevant Swift is for this round of OPW. Is Swift looking to be a part of it?

How relevant would it be for OPW if I continue to explore and learn Swift?

Thank you in advance.

-Mahati


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Error: Unable to retrieve instance list.

Hello,

 

“Error: Unable to retrieve instance list” is shown when I try to launch an instance. Please let me know the reason and remedy.

 

Best regards,

 

Giles Cornelius Gomes

Database Administrator (Billing)
NovoTel Limited
House: Ga-30/G, Pragati Sarani
Shahjadpur, Gulshan-2
Dhaka-1212, Bangladesh

Tel: +880-2-8899657; Ext: 212
Fax: +880-2-8899654
Web: www.novotel-bd.com



 

Hi Giles,

I am not sure about the issue, but can you please check whether the database is up? Also, can you paste the error from the nova logs?

Regards,

Sushma Korati

sushma_korati@persistent.co.in

Persistent Systems Ltd. |  Partners in Innovation | www.persistentsys.com

Please consider your environmental responsibility: Before printing this e-mail or any other document, ask yourself whether you need a hard copy.

From: Giles Cornelius Gomes Giles.Cornelius@novotel-bd.com
Sent: Sunday, August 10, 2014 3:42 PM
To: openstack@lists.openstack.org
Subject: [Openstack] Error: Unable to retrieve instance list.

Hello,

“Error: Unable to retrieve instance list” is shown when I try to launch an instance. Please let me know the reason and remedy.

Best regards,

Giles Cornelius Gomes

Database Administrator (Billing)
NovoTel Limited
House: Ga-30/G, Pragati Sarani
Shahjadpur, Gulshan-2
Dhaka-1212, Bangladesh

Tel: +880-2-8899657; Ext: 212
Fax: +880-2-8899654
Web: www.novotel-bd.com




[Openstack] Error: Failed to launch instance "devstack-havana": Please try again later [Error: No valid host was found. ].

I am getting following error while launching an instance from dashboard.

 

Error: Failed to launch instance "devstack-havana": Please try again later [Error: No valid host was found. ].

 

Please let me know the reason and remedy.

 

Best regards,

 

Giles Cornelius Gomes

Database Administrator (Billing)
NovoTel Limited
House: Ga-30/G, Pragati Sarani
Shahjadpur, Gulshan-2
Dhaka-1212, Bangladesh

Tel: +880-2-8899657; Ext: 212
Fax: +880-2-8899654
Web: www.novotel-bd.com



 

From:

http://openstack.redhat.com/forum/discussion/952/problem-creating-instances-no-valid-host/p1

Check subnet availability. Also:

"No valid host" means the scheduler was unable to find a compute host suitable for booting your new instance. A common cause of this is not enough memory on your compute nodes, so check that. Also make sure that your compute nodes have the brctl command available, which is necessary to set up networking for the instance -- if that command is missing, you will also see this error.

To help identify the source of the problem, you'll want to make sure you have debug logging turned on (debug=True in /etc/nova/nova.conf) on both your controller and your compute node. Pay careful attention to the messages logged when an instance is booting; generally somewhere in there will be the source of the problem.
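(A sketch of those checks as shell commands; the paths are the usual defaults and may differ on your install:)

grep -n '^debug' /etc/nova/nova.conf     # should show debug=True while troubleshooting
which brctl                              # must be present on the compute nodes
free -m                                  # enough free memory for the requested flavor?
grep -i 'no valid host' /var/log/nova/nova-scheduler.log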

Regards,

Shariar Kazi

On Sun, Aug 10, 2014 at 7:29 AM, Giles Cornelius Gomes Giles.Cornelius@novotel-bd.com wrote:

I am getting following error while launching an instance from dashboard.

Error: Failed to launch instance "devstack-havana": Please try again later [Error: No valid host was found. ].

Please let me know the reason and remedy.

Best regards,

Giles Cornelius Gomes

Database Administrator (Billing)
NovoTel Limited
House: Ga-30/G, Pragati Sarani
Shahjadpur, Gulshan-2
Dhaka-1212, Bangladesh

Tel: +880-2-8899657; Ext: 212
Fax: +880-2-8899654
Web: www.novotel-bd.com




Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] even with 60G qcow2 image " FlavorDiskTooSmall: Flavor's disk is too small for requested image. " for m1.small

Hi,
i am using Centos6.5 with Icehouse release,on KVM Hypervisor,

Trying to launch Centos instance from a newly uploaded qcow2 image.

Created centos6.5 image in qcow2 format with 60G virtual size and actual
size of 1.7G

image: Centos6.5x64.qcow2
file format: qcow2
virtual size: 60G (64424509440 bytes)
disk size: 1.7G
cluster_size: 65536

after uploading to glance

glance image-list

glance image-list
+--------------------------------------+---------------------+-------------+------------------+------------+--------+
| ID | Name | Disk Format
| Container Format | Size | Status |
+--------------------------------------+---------------------+-------------+------------------+------------+--------+
| 34b7dc50-51b2-4c01-9605-444f4f01f02d | centos6.5 | qcow2
| bare | 1851195392 | active |

When an instance is created from the above image with flavor m1.small, i.e.
with a 20.0 GB disk,

compute.log shows
FlavorDiskTooSmall: Flavor's disk is too small for requested image.
Any idea why this is so, even with disk size 60 GB > 20 GB?

Can this be a bug?

"Cirros works fine"
Thanks,


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On 10 August 2014 09:55, mad Engineer themadengin33r@gmail.com wrote:

Hi,
i am using Centos6.5 with Icehouse release,on KVM Hypervisor,

Trying to launch Centos instance from a newly uploaded qcow2 image.

Created centos6.5 image in qcow2 format with 60G virtual size and actual size of 1.7G

image: Centos6.5x64.qcow2
file format: qcow2
virtual size: 60G (64424509440 bytes)
disk size: 1.7G
cluster_size: 65536

after uploading to glance

glance image-list

glance image-list
+--------------------------------------+---------------------+-------------+------------------+------------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+---------------------+-------------+------------------+------------+--------+
| 34b7dc50-51b2-4c01-9605-444f4f01f02d | centos6.5 | qcow2 | bare | 1851195392 | active |

When an instance is created from the above image with flavor m1.small, i.e. with a 20.0 GB disk,

compute.log shows
FlavorDiskTooSmall: Flavor's disk is too small for requested image.
Any idea why this is so, even with disk size 60 GB > 20 GB?

Can this be a bug?

"Cirros works fine"
Thanks,

It must allocate the 60 GB that the image might grow to.
Instead, make the disk 2 GB (qemu-img create -f qcow2 centos6.5.qcow2 2G),
install into it, and set up grow-on-boot for the filesystem (e.g. cloud-init/growpart).
When you launch from glance to nova, it will auto-size to whatever
disk you give it. Use a flavor with min-disk 2G.

Now someone can launch it @ 2G, have ~300MB free, or launch it as 20G
and have ~18GB free, their call.

But the way you have it, you must use a flavor of min-disk >= 60.

Not a bug.
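(A minimal sketch of that workflow, using the image name from this thread; the glance flags are from the Icehouse-era v1 CLI:)

qemu-img create -f qcow2 centos6.5.qcow2 2G
# ...install CentOS into the 2 GB disk and add the grow-on-boot tooling...
qemu-img info centos6.5.qcow2       # virtual size should now read 2.0G
glance image-create --name centos6.5 --disk-format qcow2 \
    --container-format bare --min-disk 2 --file centos6.5.qcow2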

Re: [Openstack] VM bootting hang on OpenStack Icehouse + Centos 6.4 64bit + KVM

Thanks all for the help. It was all down to my wrong config virt_type=qemu; it
should be kvm. Poor me.
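(For reference, a minimal sketch of that fix on the compute node; Icehouse reads virt_type from the [libvirt] section, while older releases used libvirt_type under [DEFAULT]:)

# /etc/nova/nova.conf
[libvirt]
virt_type = kvm
# then restart the service, e.g.: service nova-compute restart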

2014-08-05 18:18 GMT+07:00 dhanesh1212121212 dhanesh1212@gmail.com:

Hi,

Please paste the steps you followed for creating centos.img,
and the site you downloaded ubuntu.img from.

Note: check whether VT is enabled. Check the nova log for info.

Regards,
Dhanesh M.

On Tue, Aug 5, 2014 at 8:19 AM, ZHOU TAO A tao.a.zhou@alcatel-lucent.com
wrote:

What's your CentOS kernel version?

On 08/04/2014 06:25 PM, Anh Tu Nguyen wrote:

Hi,
Cirros run perfectly.

Added acpi=off; the same error still remains:

[ 1.256277] init[1] trap invalid opcode ip:7f7a015dea0b sp:7fff3ebac9e8 error:0 in libc.so.6[7f7a015bd000+1b5000]
[ 1.258860] Kernel panic - not syncing: Attempted to kill init!
[ 1.259137] Pid: 1, comm: init Not tainted 3.2.0-36-virtual #57-Ubuntu
[ 1.259298] Call Trace:
[ 1.259779] [] panic+0x91/0x1a4

Quite strange...

2014-08-04 17:16 GMT+07:00 dhanesh1212121212 dhanesh1212@gmail.com:

Hi,

Have u tried booting default cirros image?

please try this option

http://askubuntu.com/questions/139157/booting-ubuntu-12-04-with-acpi-off-grub-parameter
.

Note: Ignore if already tried

Regards,
Dhanesh M

On Mon, Aug 4, 2014 at 1:46 PM, Anh Tu Nguyen ng.tuna11@gmail.com
wrote:

Thanks Dhanesh, I tried this before. Still hang... EDD is just a
notice, not a issue I think.

I'm not sure but look:

[ 1.035496] init[1] trap invalid opcode ip:7fc17cab71b2 sp:7fffae373780 error:0 in libc.so.6[7fc17ca98000+17e000]
[ 1.044709] Kernel panic - not syncing: Attempted to kill init!
[ 1.045096] Pid: 1, comm: init Not tainted 2.6.32-64-server #128-Ubuntu
[ 1.045410] Call Trace:
[ 1.045815] [] panic+0x78/0x139
[ 1.046049] [] forget_original_parent+0x30d/0x320
[ 1.046349] [] ? put_files_struct+0xc4/0xf0
[ 1.046619] [] exit_notify+0x1b/0x1b0
[ 1.046851] [] do_exit+0x1c0/0x390
[ 1.047072] [] do_group_exit+0x55/0xd0
[ 1.047324] [] get_signal_to_deliver+0x1d7/0x3d0
[ 1.047607] [] do_signal+0x75/0x1c0
[ 1.047835] [] ? do_invalid_op+0x95/0xb0
[ 1.048077] [] do_notify_resume+0x5d/0x80
[ 1.048567] [] retint_signal+0x48/0x8c

I guess the problem comes from an init error.

Need more help.

2014-08-04 13:59 GMT+07:00 dhanesh1212121212 dhanesh1212@gmail.com:

Hi ,

please try this in kernel line. edd=off.

refer the site below.

https://access.redhat.com/solutions/47621

Regards,
Dhanesh M

On Mon, Aug 4, 2014 at 11:23 AM, Anh Tu Nguyen ng.tuna11@gmail.com
wrote:

Hi guys,

I'm deploying OpenStack Icehouse with KVM on Centos 6.4 64bit.
Everything is good now. However, I can't boot VM (both Centos and Ubuntu).
Ubuntu images I downloaded from https://cloud-images.ubuntu.com.
Centos was built by myself.

All VMs stuck at booting console, here are startup logs I copy from
the log tab:

Centos 6.5: https://gist.github.com/ngtuna/2f3065b8d48e462e458c
Ubuntu Lucid: https://gist.github.com/ngtuna/6b60f94ff4b768766c6b

On console, I see all VMs hang after this line: "Probing EDD
(edd=off to disable)... OK"

I found a post about adding "nomodeset", but it was unsuccessful for me.

http://blog.scottlowe.org/2014/07/18/fix-for-strange-issue-booting-kvm-guests/

That's very strange. Please help,
---Tuna


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

--
---Tuna

--
---Tuna


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

--
---Tuna


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] use rdo install neutron ml2+vxlan

hi all:
I just used rdo to install neutron, and I have some questions.

After I finished.
I run this command and I cannot see br-tun:
ovs-vsctl show
876d6e88-b3fc-4709-8e29-5e61fbbd1001
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Bridge br-int
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port br-int
Interface br-int
type: internal
ovs_version: "1.11.0"
But when I then run this command again, br-tun appears. What happened here?
ovs-vsctl show
876d6e88-b3fc-4709-8e29-5e61fbbd1001
Bridge br-tun
Port br-tun
Interface br-tun
type: internal
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Bridge br-int
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port br-int
Interface br-int
type: internal
ovs_version: "1.11.0"

I run this command and just see LOCAL(br-tun). Is that normal? I think br-tun must have a port to br-int?
ovs-ofctl show br-tun
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000a298acde9a47
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
LOCAL(br-tun): addr:a2:98:ac:de:9a:47
 config: 0
 state: 0
 speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

Can anyone give me some help? Thanks!

ttjiang

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Actually, br-int and br-tun are connected through a pair of patch ports; they are already created in your case. See interface "patch-tun" in br-int and "patch-int" in br-tun.
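(A quick way to verify the peering, as a sketch with stock ovs-vsctl commands:)

ovs-vsctl list-ports br-int                      # should include patch-tun
ovs-vsctl list-ports br-tun                      # should include patch-int
ovs-vsctl get Interface patch-tun options:peer   # expected: "patch-int"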

-Nathan.nxu
From: ttjiang
Sent: 2014-08-11 10:28
To: openstack@lists.openstack.org
Subject: [Openstack] use rdo install neutron ml2+vxlan
hi all:
I just used rdo to install neutron, and I have some questions.

After I finished.
I run this command and I cannot see br-tun:
ovs-vsctl show
876d6e88-b3fc-4709-8e29-5e61fbbd1001
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Bridge br-int
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port br-int
Interface br-int
type: internal
ovs_version: "1.11.0"
But when I then run this command again, br-tun appears. What happened here?
ovs-vsctl show
876d6e88-b3fc-4709-8e29-5e61fbbd1001
Bridge br-tun
Port br-tun
Interface br-tun
type: internal
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Bridge br-int
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port br-int
Interface br-int
type: internal
ovs_version: "1.11.0"

I run this command and just see LOCAL(br-tun). Is that normal? I think br-tun must have a port to br-int?
ovs-ofctl show br-tun
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000a298acde9a47
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
LOCAL(br-tun): addr:a2:98:ac:de:9a:47
 config: 0
 state: 0
 speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

Can anyone give me some help? Thanks!

ttjiang

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Re: [Openstack] [nova] Libvirt driver domain metadata - add instance metadata dictionary?

On Fri, Aug 01, 2014 at 03:47:48PM -0500, Matt Riedemann wrote:

On 7/31/2014 6:58 AM, Markus Zoeller wrote:

The blueprint "libvirt-driver-domain-metadata" introduces some of the
instance's properties to the libvirt.xml file, for example the name
of the instance, the name of the flavor and the creation date.

Would it make sense to add the instance.metadata dictionary also?

API: /v2/​{tenantid}​/servers/​{serverid}​/metadata
Code: https://github.com/openstack/nova/blob/master/
nova/objects/instance.py#L148

You could ask danpb in #openstack-nova IRC about his thoughts, but looking
at the spec and code it looks like a specific metadata schema was in mind.
The metadata that a user can pass in when spawning an instance is arbitrary
so it wouldn't really fit into the schema created unless that was modified
to add some custom values, which would be the user metadata.

Is there a use case for putting user metadata in there? Looks like the
blueprint is for adding specific metadata so an admin can correlate his
libvirt domains against nova API calls.

The intent was primarily to aid in debugging libvirt by providing information
that is/was relevant to the libvirt guest configuration.

The instance metadata dict is not something that affects libvirt - IIRC it
is only relevant to the guest OS, so I don't think it is relevant to include
in the libvirt XML.

Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

"Daniel P. Berrange" berrange@redhat.com wrote on 08/11/2014 11:39:25
AM:

From: "Daniel P. Berrange" berrange@redhat.com
To: Matt Riedemann mriedem@linux.vnet.ibm.com
Cc:
Date: 08/11/2014 11:49 AM
Subject: Re: [Openstack] [nova] Libvirt driver domain metadata - add
instance metadata dictionary?

On Fri, Aug 01, 2014 at 03:47:48PM -0500, Matt Riedemann wrote:

On 7/31/2014 6:58 AM, Markus Zoeller wrote:

The blueprint "libvirt-driver-domain-metadata" introduces some of the
instance's properties to the libvirt.xml file, for example the name
of the instance, the name of the flavor and the creation date.

Would it make sense to add the instance.metadata dictionary also?

API: /v2/​{tenantid}​/servers/​{serverid}​/metadata
Code: https://github.com/openstack/nova/blob/master/
nova/objects/instance.py#L148

You could ask danpb in #openstack-nova IRC about his thoughts, but
looking
at the spec and code it looks like a specific metadata schema was in
mind.
The metadata that a user can pass in when spawning an instance is
arbitrary
so it wouldn't really fit into the schema created unless that was
modified
to add some custom values, which would be the user metadata.

Is there a use case for putting user metadata in there? Looks like
the
blueprint is for adding specific metadata so an admin can correlate
his
libvirt domains against nova API calls.

The intent was primarily to aid in debugging libvirt by providing
information
that is/was relevant to the libvirt guest configuration.

The instance metadata dict is not something that affects libvirt - IIRC it
is only relevant to the guest OS, so I don't think it is relevant to include
in the libvirt XML.

Maybe the direction I'm heading is wrong. My intention is to enable a
correlation between multiple libvirt domains independent of their flavor.
E.g.
-------------- -------------- --------------
| Server: A  | | Server: B  | | Server: C  |
| Flavor: X  | | Flavor: X  | | Flavor: Y  |
| Group: foo | | Group: bar | | Group: foo |
-------------- -------------- --------------

I'd like to enable the hypervisor to understand that servers A and C are
correlated because of the same "Group: foo". Is there already another
mechanism which enables that?

[Openstack] [TripleO] Adding a new node to instack RDO setup

Hi all,

I have successfully set up tripleo using instack RDO in a virtual
machine environment. My undercloud and overcloud all are in VMs.
Now, I need to add a new physical baremetal node to the setup. I 
have the mac address of the node to be added. I registered the node
and in the openstack deployment tab I tried to add 1 free node as 
compute node. But as I click deploy, I get the following error:

"HTTPResponse insatnce has no attribute 'headers' "

Any ideas on how to resolve this error?
 
Regards,
~Peeyush Gupta

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Re: [Openstack] Sahara: Change Java heap space

Hi,

which plugin are you using?

On Tue, Jul 29, 2014 at 6:42 AM, Dat Tran dattbbk@gmail.com wrote:

Hi everybody,

I have created a Hadoop cluster (when creating the Node group template, I
modified mapred.child.java.opts from "-Xmx200m" to "-Xmx1024m"). Then I ssh to
the master instance and run "hadoop jar hadoop-examples-1.2.1.jar pi 10 100", and I
get the error:
java.lang.OutOfMemoryError: Java heap space

Check file /etc/hadoop/hadoop-env.sh. I see the default:
export HADOOP_CLIENT_OPTS="-Xmx128m $HADOOP_CLIENT_OPTS"

Then, I edit 128 to 1024:
export HADOOP_CLIENT_OPTS="-Xmx1024m $HADOOP_CLIENT_OPTS"

When I run "hadoop jar hadoop-examples-1.2.1.jar pi 10 100", it works! But
my cluster is still in the "Waiting" state and ip_master:50030/jobtracker.jsp
is not running?

Why is HADOOP_CLIENT_OPTS in /etc/hadoop/hadoop-env.sh not automatically
updated when the cluster is created?

What is the problem here? Thank you very much!


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

--
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi,

I used vanilla plugin: sahara-icehouse-vanilla-1.2.1-ubuntu-13.10.qcow2

Thanks.

2014-08-11 19:25 GMT+07:00 Sergey Lukjanov slukjanov@mirantis.com:

Hi,

which plugin are you using?

On Tue, Jul 29, 2014 at 6:42 AM, Dat Tran dattbbk@gmail.com wrote:

Hi everybody,

I have created a Hadoop cluster (when creating the Node group template, I
modified mapred.child.java.opts from "-Xmx200m" to "-Xmx1024m"). Then I ssh
to the master instance and run "hadoop jar hadoop-examples-1.2.1.jar pi 10
100", and I get the error:
java.lang.OutOfMemoryError: Java heap space

Check file /etc/hadoop/hadoop-env.sh. I see the default:
export HADOOP_CLIENT_OPTS="-Xmx128m $HADOOP_CLIENT_OPTS"

Then, I edit 128 to 1024:
export HADOOP_CLIENT_OPTS="-Xmx1024m $HADOOP_CLIENT_OPTS"

When I run "hadoop jar hadoop-examples-1.2.1.jar pi 10 100", it works! But
my cluster is still in the "Waiting" state and ip_master:50030/jobtracker.jsp
is not running?

Why is HADOOP_CLIENT_OPTS in /etc/hadoop/hadoop-env.sh not automatically
updated when the cluster is created?

What is the problem here? Thank you very much!


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

--
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Core pinning feature

Hallo all,

Can I kindly know whether the core-pinning feature is enabled in OpenStack? Can VMs be mapped to a specific physical core during the time of its creation?

Regards,
Krishnaprasad


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

This work is currently ongoing, with many of the related blueprints
scheduled to be implemented by the Juno release.

Here are a couple of the related blueprints:

https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement
https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages

Best,
-jay

On 08/11/2014 09:46 AM, Narayanan, Krishnaprasad wrote:
Hallo all,

Can I kindly know whether the core-pinning feature is enabled in
OpenStack? Can VMs be mapped to a specific physical core during the time
of its creation?

Regards,

Krishnaprasad


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [OSSN 0022] Nova Networking does not enforce security group rules following a soft reboot of an instance

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Nova Networking does not enforce security group rules following a soft
reboot of an instance


Summary

In deployments using Nova Networking, security group rules associated
with an instance may not be enforced after a soft reboot. Nova is
designed to apply the configured security group rules to an instance
when certain operations are performed, such as a normal boot operation.
If an operation has been performed that results in the clearing of
security group rules, such as restarting the nova compute service, then
performing a soft reboot of that instance will cause it to be
started without security group rules being applied.

Deployments using Neutron are not impacted.

Affected Services / Software

Nova, Havana, Grizzly

Discussion

In Nova deployments using Nova Networking, security groups are
implemented using iptables, which is used to configure and control
network traffic into Nova instances. When an instance is first booted
using the normal boot method (nova boot <server>), the security
group rules are applied to that instance.

When an instance is rebooted using the soft reboot method (nova reboot
<server>), the security group rules are not reapplied since they
should have been already applied when the instance was initially
booted. If the security group rules have not been applied following an
event that resulted in their clearing, such as restarting the compute
service, the instance will be brought up without security group
enforcement. This situation is most likely to arise in cases where the
Nova compute service has been terminated or restarted, which removes
all iptables rules. If a stopped instance is then started by using a
soft reboot, it will not have any security group rules applied. A hard
reboot (nova reboot --hard <server>) reapplies the security group
rules, so it is not susceptible to this issue.

Depending on the deployment architecture, this could breach security
assumptions and leave an instance vulnerable to network based attacks.

This issue only affects the Havana and Grizzly releases. The Icehouse
release does not allow a stopped instance to be started using a soft
reboot, therefore this issue does not affect the Icehouse release.

Recommended Actions

Do not use the soft reboot method to start instances from the
stopped state. If instances are in the stopped state, boot using "nova
boot <server>" or reboot using "nova reboot --hard <server>"
to force the security group rules to be applied.

Contacts / References

This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0022
Original LaunchPad Bug : https://bugs.launchpad.net/nova/+bug/1316822
OpenStack Security ML : openstack-security@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJT6MtPAAoJEJa+6E7Ri+EVthwH/3pSgRq5x0CA+ABayLD2DW6s
QbNXoPbg419xx4uqBr00yfKLiSiBNNaVXSjYgjFGhqTBm2zN/KdalsizWwg0fRkR
AlNUaTGDcAYWnBV/FZhpRumCUsm8N+7xim9zj4nkXKaDduUnW88ytptHcwgAp0gR
9uqoTU88e1+g1ALJ8DwKWc9GdjjO1cMzI1ujPG68tKDpFiTS1/L9R3eBwenK0g6P
kKsiLgjJCRjXNyfmj/+IPZRIEQm21QsnIFl0bwu+E3w4LdoVk1rvmfPJL5vdIoYU
I69Vra4ZVya5X6PJ2RjPFHf4uWUIN2xI6J8ipxljY8bfHAQRVgw72SuUjTAcrLs=
=xO84
-----END PGP SIGNATURE-----


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] SWIFT - max file/object limits and tombstone question

By default the maximum object size is 5G. Outside of increased replication
times, would there be any impacts if I increase that value to 10G? The
reason is I have to store up to 10G media files. Using the large file
manifest just isn't going to work given Smooth Streaming or Adobe HDS
delivery.

For Apple HLS delivery the media files are stored with a .ts extension.
Would that cause any conflicts with tombstone files? Would Swift
mistakenly mark these media files as candidates for deletion? What would
happen to a HLS .ts file once it needs to be marked as a tombstone file?

Thanks!


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

The 5 GB default is not a hard Swift limitation. You may want to use segmentation (the swift client's -S/--segment-size option) to go over 5 GB.
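(A minimal sketch with the python-swiftclient CLI, assuming a container named "media"; -S is the segment size in bytes, segments land in "media_segments", and a manifest object is created in "media":)

swift upload media bigmovie.ts -S 1073741824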

Remo
On Aug 11, 2014, at 13:38, Brent Troge brenttroge2016@gmail.com wrote:

By default the maximum object size is 5G. Outside of increased replication times, would there be any impacts if I increase that value to 10G? The reason is I have to store up to 10G media files. Using the large file manifest just isn't going to work given Smooth Streaming or Adobe HDS delivery.

For Apple HLS delivery the media files are stored with a .ts extension. Would that cause any conflicts with tombstone files? Would Swift mistakenly mark these media files as candidates for deletion? What would happen to a HLS .ts file once it needs to be marked as a tombstone file?

Thanks!

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Problem with Keystone on Debian 7

Hi all,

I'm new to Openstack, and now have some problems with the installation
of OS Havana on Debian 7.
I used the official document for Debian
7(http://docs.openstack.org/icehouse/install-guide/install/apt-debian/content/)
word-by-word, but there are some strange errors, I can't explain.

I am trying to control my user-role-list via keystone user-role-list,
but I just get

WARNING: Bypassing authentication using a token & endpoint
(authentication credentials are being ignored).
Unknown Attribute: auth_tenant_id

keystone user-list and keystone role-list work perfectly.

There are no hints or any output in logfile (debug and verbose logging
enabled).

Any hints or suggestions?
Thanks
Dan


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On Tue, Aug 12, 2014 at 4:42 PM, Daniel Spiekermann <
daniel.spiekermann@fernuni-hagen.de> wrote:

Hi all,

I'm new to Openstack, and now have some problems with the installation of
OS Havana on Debian 7.
I used the official document for Debian 7(http://docs.openstack.org/
icehouse/install-guide/install/apt-debian/content/) word-by-word, but
there are some strange errors, I can't explain.

I am trying to control my user-role-list via keystone user-role-list, but
I just get

WARNING: Bypassing authentication using a token & endpoint (authentication
credentials are being ignored).
Unknown Attribute: auth_tenant_id

keystone user-list and keystone role-list work perfectly.

There are no hints or any output in logfile (debug and verbose logging
enabled).

Any hints or suggestions?
Thanks
Dan


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/
openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/
openstack

When using OS_SERVICE_TOKEN, this error is expected.
To run keystone user-role-list, you need to create an admin account and use
that account's username/password to authenticate.
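(A minimal sketch, assuming an "admin" user and tenant and keystone on a host named "controller":)

unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT   # otherwise the token bypass still wins
export OS_USERNAME=admin
export OS_PASSWORD=<admin password>
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:5000/v2.0
keystone user-role-list --user admin --tenant admin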

--
YY Inc. is hiring openstack and python developers. Interested? Check
http://soa.game.yy.com/jobs.html

--
Thanks,
Yuanle


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Suggestions on network setup with OpenStack on VirtualBox?

Hi,

I'm trying to setup a devstack env on VirtualBox. Anybody have some
recommendations regarding the Neutron network setting? Any best
practices? GRE, VLAN, VXLAN, FLAT, ...?

Thanks in advance!

Regards,
Qiming


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On 08/12/2014 03:09 PM, Qiming Teng wrote:
I'm trying to setup a devstack env on VirtualBox. Anybody have some
recommendations regarding the Neutron network setting? Any best
practices? GRE, VLAN, VXLAN, FLAT, ...?

Have a look at https://github.com/openstack-dev/devstack-vagrant or
https://github.com/berendt/vagrant-devstack.

Christian.

--
Christian Berendt
Cloud Computing Solution Architect
Mail: berendt@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

[Openstack] Need help on GLANCE configuration on UBUNTU 12.04

Hi All,

I am in the middle of an OpenStack installation on UBUNTU 12.04. After installing and configuring GLANCE, when I tested my configuration by running the command 'glance index', it showed the error below:

openstack2@ubuntu:~$ glance index
Failed to show index. Got error:
You are not authenticated.
openstack2@ubuntu:~$

Can you please let me know what the reason behind this could be, or how I can fix the issue? Please also let me know any alternative way through which I could check whether my GLANCE configuration is correct or not.

Best Regards,

ALOK K. SINGH


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hello Alok,

Check the Keystone service and endpoint you created for glance: are they
correct? If they are, check the credentials you are providing
while running glance index.

Try this command (or export these settings to your environment):

glance --os-username=<user> --os-password=<password> --os-tenant-name=<tenant> \
    --os-auth-url=http://<keystone-ip>:5000/v2.0 index
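(To check the service and endpoint themselves, a sketch using the keystone CLI of the same era:)

keystone service-list    # glance should be listed with type "image"
keystone endpoint-list   # its public URL should point at your glance host, port 9292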

Regards,
Ritesh Nanda

On Tue, Aug 12, 2014 at 9:29 AM, Alok Kumar Singh aloksingh@axway.com
wrote:

Hi All,

I am in the middle of an OpenStack installation on UBUNTU 12.04. After
installing and configuring GLANCE, when I tested my configuration by
running the command ‘glance index’, it showed the error below:

openstack2@ubuntu:~$ glance index

Failed to show index. Got error:

You are not authenticated.

openstack2@ubuntu:~$

Can you please let me know what the reason behind this could be, or how I can
fix the issue? Please also let me know any alternative way through which I
could check whether my GLANCE configuration is correct or not.

Best Regards,

ALOK K. SINGH


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

--

[Openstack] Syntactical Restrictions on API visible names

The "name" attribute is visible and used in many API calls. Identity API V3
defines the following: domain, role, user, project group.

Are there any further syntactical restrictions on "name" attribute ? For
example,

Names can contain ASCII letters 'a' through 'z', the digits '0' through
'9', name between 1 & 63 characters long, can't start or end with "-" and
can't start with digit.

For e.g. are there any IETF RFC standards ( like RFC 1123 ) that name is
required to comply with ?

Right now, I can't tell because the type of the "name" attribute in API
documentation is xsd:string.

Thanks,
--
Sekhar Vajjhala


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Neutron "dhcp_domain" per tenant?

Guys,

Is it possible to configure a different domain at dhcp_domain for each
Tenant?

Tks!
Thiago


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Urgent! Just upgraded to IceHouse 2014.1.2 on Ubuntu 14.04, metadata disapeared + namespace not found

Guys,

I just upgraded my Icehouse (using apt-get update; apt-get dist-upgrade) and both the dhcp agent and the metadata agent died.

dhcp-agent.log:


2014-08-13 14:00:15.113 3419 TRACE neutron.agent.dhcp_agent Stderr: 'Cannot
open network namespace "qdhcp-f0076840-43f3-4b2e-aa15-d6b2422e3795": No
such file or directory\n'

Full log: http://paste.openstack.org/show/94504/

I'm using DHCP (for IPv4), and static IPv6, it was working like a charm
until now.

But the Namespace is here, look:


root@psuaf-1:~# ip netns exec qdhcp-f0076840-43f3-4b2e-aa15-d6b2422e3795 ip l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
22: tap216bf57d-e8: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/ether fa:16:3e:2d:67:53 brd ff:ff:ff:ff:ff:ff

What can I do?!

I did not touch the configuration files; I mean, the config files are
there, untouched.

This is a production cloud and it is impossible to create new instances
right now... :-/

Tks in advance!

Regards,
Thiago


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On Wed, Aug 13, 2014 at 1:12 PM, Martinx - ジェームズ thiagocmartinsc@gmail.com
wrote:

neutron.agent.dhcp_agent Stderr: 'Cannot open network namespace
"qdhcp-f0076840-43f3-4b2e-aa15-d6b2422e3795": No such file or directory\n'

Make sure openvswitch is started?
try: ps aux | grep openvswitch
then: service openvswitch-switch restart
Gluck!

Regards,

Shariar Kazi


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Neutron metadata agent and HA

Hello,

I want to have OpenStack with two controller nodes (with nova services). On
the compute node there are neutron agents. There is also a neutron metadata
agent on each host. I know that it is only some kind of proxy for the metadata
service in nova, and in my infra that nova service is running on both
controller nodes. Now my question is: is it possible in some way to make this
metadata service highly available? In the neutron-metadata agent I can put
only one IP address, so I can have only one nova metadata service. Maybe you
know of a solution to that? I'm using the havana release of openstack.

--
Best regards,
Sławek Kapłoński
slawek@kaplonski.pl

--
My GPG key can be downloaded from:
http://kaplonski.pl/files/slawek_kaplonski.pub.key
--
My public GPG key can be downloaded from:
http://kaplonski.pl/files/slawek_kaplonski.pub.key

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] How to "share" a SSH Key Pair between users of the same Project / Tenant...

Guys,

How can I add a SSH Key Pair to a Project / Tenant, that is available to
all users of that Project / Tenant?

Thanks!
Thiago


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On Wed, Aug 13, 2014 at 3:24 PM, Martinx - ジェームズ thiagocmartinsc@gmail.com
wrote:

Guys,

How can I add a SSH Key Pair to a Project / Tenant, that is available to
all users of that Project / Tenant?

Just copy the private key to every user and they can reference it when
connecting:

ssh -i VM-shared-key virtualMachine
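(Alternatively, each user can import the same public key under their own account, so only the public half needs to be shared; a sketch using the nova CLI with a hypothetical key file name:)

nova keypair-add --pub-key shared_key.pub shared-key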

--
Marcelo

"¿No será acaso que esta vida moderna está teniendo más de moderna que de
vida?" (Mafalda)


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Multiple network and nodes

For the sake of redundancy, is it possible to have multiple network and admin nodes in an Openstack setup?  If so, how is that managed?

 

Dan O'Reilly

UNIX Systems Administration

9601 S. Meridian Blvd.

Englewood, CO 80112

720-514-6293

 

 

Yes, look at OpenStack HA.  There's some extra software required, but it works... :-)

Cheers,
Tudor. 

On 2014-08-14 05:16, O'Reilly, Dan wrote:

For the sake of redundancy, is it possible to have multiple network and admin nodes in an Openstack setup?  If so, how is that managed?

Dan O'Reilly

UNIX Systems Administration

9601 S. Meridian Blvd.

Englewood, CO 80112

720-514-6293

[Openstack] Swift And Erasure Code Storage

With a 100% 'Erasure Code' policy, how much extra storage is needed to
satisfy a 1 PB usable cluster?


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

The answer totally depends on how you choose to configure EC. We generally refer to the "extra storage" as the overhead of the durability policy. For triple replication, obviously it's 3x (you buy 3x the usable capacity you need). For EC it depends not only on which scheme you choose but also on the parameters you configure. For example, Swift will support a few different Reed-Solomon schemes out of the box (when it's done), and from there you can choose the ratio of data:parity, such as 10:4, where you'd have 14 total disks, 10 of them for data and 4 for parity. In this scheme you could lose 4 disks and still recover your data, and your overhead would be 1.4 (14/10) as opposed to triple replication's 3 (3/1).
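A worked example against the 1 PB question, using that 10:4 scheme: raw capacity = usable * (data + parity) / data = 1 PB * 14/10 = 1.4 PB, i.e. 0.4 PB of extra storage, versus 3 PB total for triple replication.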

-Paul

From: Brent Troge [mailto:brenttroge2016@gmail.com]
Sent: Wednesday, August 13, 2014 2:41 PM
To: openstack@lists.openstack.org
Subject: [Openstack] Swift And Erasure Code Storage

With a 100% 'Erasure Code' policy how much extra storage is needed to satisfy a 1PB usable cluster ?


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [keystone][tripleO devtest]

Hi, All:
I've deployed an undercloud baremetal with tripleO devtest. But
when I was trying to add endpoints to keystone and use other services like
nova and glance, there is always a certificate error. *I can not use any
service except keystone on the undercloud machine when I source undercloudrc.*
Please help me.
It always reports "Verify error: Command 'openssl' returned non-zero
exit status 4".
I saved the token in one file and ran "sudo openssl cms -verify
-certfile /tmp/keystone-signing-R8K0yX/signing_cert.pem -CAfile
/tmp/keystone-signing-R8K0yX/cacert.pem -inform PEM -nosmimecap -nodetach
-nocerts -noattr -in token.txt". It reports the error:
Verification failure
3074320060:error:2E099064:CMS routines:CMS_SIGNERINFO_VERIFY_CERT:certificate verify error:cms_smime.c:304:Verify error:certificate is not yet valid

Below are error logs:

2010-08-28 04:04:25.738 16927 DEBUG keystonemiddleware.auth_token [-]
Server reports support for api versions: v3.0, v2.0 _get_supported_versions
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/auth_token.py:1253

2010-08-28 04:04:25.738 16927 INFO keystonemiddleware.auth_token [-] Auth
Token confirmed use of v3.0 apis

2010-08-28 04:04:25.748 16927 DEBUG keystonemiddleware.auth_token [-]
data(-----BEGIN CMS-----

MIIBygYJKoZIhvcNAQcCoIIBuzCCAbcCAQExDTALBglghkgBZQMEAgEwHgYJKoZI
hvcNAQcBoBEED3sicmV2b2tlZCI6IFtdfTGCAYEwggF9AgEBMFgwUzELMAkGA1UE
BhMCWFgxDjAMBgNVBAgTBVVuc2V0MQ4wDAYDVQQHEwVVbnNldDEOMAwGA1UEChMF
VW5zZXQxFDASBgNVBAMTC0tleXN0b25lIENBAgECMAsGCWCGSAFlAwQCATANBgkq
hkiG9w0BAQEFAASCAQB2lxoXMw1l8DBRUxxD9iLep85XrJMnHspEE94GEUnZkaH1
FkGWUqdCVSXcsvVcuWcQKb9ZjMwGnBeTYNeqc66Fezy4Sg2HSbXjne7uk4giGIe+
7fOeGVl25q05zjUmwrUzZRKv4vpLaQxZctxSGSRXNrDdmRcrC02YFVm6/Mghpvtx
4SjWk5EWrq+/ZxQ2sFxjYVSREF6YjpPf+PcS6Hh9ieBpUH2GGm+kr4/KdIyyrHlm
SRItYmsxE3fF1n2N23bQQULkdNotRzj8fIYJLEno7XWqvMxqQKUTyQBTlfVTfpo5
kumS9+5On4Gx2vc2ZLQEjQT0lEOxyaV4ze3+3Fc2

-----END CMS-----

) verify
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/auth_token.py:867

2010-08-28 04:04:25.749 16927 DEBUG keystonemiddleware.auth_token [-]
_signing_cert_file_name(/tmp/keystone-signing-TASGp3/signing_cert.pem) verify
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/auth_token.py:868

2010-08-28 04:04:25.749 16927 DEBUG keystonemiddleware.auth_token [-]
_signing_ca_file_name(/tmp/keystone-signing-TASGp3/cacert.pem) verify
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/auth_token.py:869

2010-08-28 04:04:25.755 16927 WARNING keystonemiddleware.auth_token [-]
Verify error: Command 'openssl' returned non-zero exit status 4

2010-08-28 04:04:25.755 16927 DEBUG keystonemiddleware.auth_token [-] Token
validation failure. _validate_user_token
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/auth_token.py:687

2010-08-28 04:04:25.755 16927 TRACE keystonemiddleware.auth_token Traceback (most recent call last):
2010-08-28 04:04:25.755 16927 TRACE keystonemiddleware.auth_token   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/auth_token.py", line 671, in _validate_user_token
2010-08-28 04:04:25.755 16927 TRACE keystonemiddleware.auth_token     verified = self._verify_pkiz_token(user_token, token_ids)
2010-08-28 04:04:25.755 16927 TRACE keystonemiddleware.auth_token   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/auth_token.py", line 905, in _verify_pkiz_token
2010-08-28 04:04:25.755 16927 TRACE keystonemiddleware.auth_token     if self._is_signed_token_revoked(token_ids):
2010-08-28 04:04:25.755 16927 TRACE keystonemiddleware.auth_token   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/auth_token.py", line 845, in _is_signed_token_revoked
2010-08-28 04:04:25.755 16927 TRACE keystonemiddleware.auth_token     if self._is_token_id_in_revoked_list(token_id):
2010-08-28 04:04:25.755 16927 TRACE keystonemiddleware.auth_token   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/auth_token.py", line 852, in _is_token_id_in_revoked_list
2010-08-28 04:04:25.755 16927 TRACE keystonemiddleware.auth_token     revocation_list = self._token_revocation_list
2010-08-28 04:04:25.755 16927 TRACE keystonemiddleware.auth_token   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/auth_token.py", line 963, in _token_revocation_list
2010-08-28 04:04:25.755 16927 TRACE keystonemiddleware.auth_token     self._token_revocation_list = self.fetch_revocation_list()
2010-08-28 04:04:25.755 16927 TRACE keystonemiddleware.auth_token   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/auth_token.py", line 997, in fetch_revocation_list
2010-08-28 04:04:25.755 16927 TRACE keystonemiddleware.auth_token     return self._cms_verify(revocation_list_data)
2010-08-28 04:04:25.755 16927 TRACE keystonemiddleware.auth_token   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/auth_token.py", line 888, in _cms_verify
2010-08-28 04:04:25.755 16927 TRACE keystonemiddleware.auth_token     return verify()
2010-08-28 04:04:25.755 16927 TRACE keystonemiddleware.auth_token   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/auth_token.py", line 874, in verify
2010-08-28 04:04:25.755 16927 TRACE keystonemiddleware.auth_token     inform=inform).decode('utf-8')
2010-08-28 04:04:25.755 16927 TRACE keystonemiddleware.auth_token   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystoneclient/common/cms.py", line 178, in cms_verify
2010-08-28 04:04:25.755 16927 TRACE keystonemiddleware.auth_token     raise e
2010-08-28 04:04:25.755 16927 TRACE keystonemiddleware.auth_token CalledProcessError: Command 'openssl' returned non-zero exit status 4
2010-08-28 04:04:25.755 16927 TRACE keystonemiddleware.auth_token

2010-08-28 04:04:25.757 16927 DEBUG keystonemiddleware.auth_token [-]
Marking token as unauthorized in cache _cache_store_invalid
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/auth_token.py:1501

2010-08-28 04:04:25.757 16927 WARNING keystonemiddleware.auth_token [-]
Authorization failed for token

2010-08-28 04:04:25.757 16927 INFO keystonemiddleware.auth_token [-]
Invalid user token - deferring reject downstream
Best Regards!

Chao Yan
--------------
My twitter: Andy Yan @yanchao727 (https://twitter.com/yanchao727)
My Weibo: http://weibo.com/herewearenow
--------------


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi Chao Yan,

On Thu, Aug 14, 2014 at 9:07 AM, 严超 yanchao727@gmail.com wrote:

Verification failure
3074320060:error:2E099064:CMS routines:CMS_SIGNERINFO_VERIFY_CERT:certificate verify error:cms_smime.c:304:Verify error:certificate is not yet valid

Usually when I've seen errors like this, it was caused by machines not
having their clocks in sync. If the CA generating the cert has its clock a
little bit in the future it will generate a cert that isn't valid yet

I'd suggest taking a look at the cert, and checking the clocks on your
machines to make sure they're all correct - maybe one of them lost its
hardware RTC and ended up back in 1970 after a reboot, or something
peculiar like that.
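(A sketch of those checks; the cert path is the one from the log above, and ntpq applies only if NTP is in use:)

openssl x509 -in /tmp/keystone-signing-TASGp3/signing_cert.pem -noout -dates
date -u      # compare the notBefore/notAfter window with each node's clock
ntpq -p      # verify time sync, if ntpd is running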


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Xen as a hypervisor

Stackers - I'm planning to use Citrix XenServer as a hypervisor in my environment (Icehouse).
Has anyone implemented this successfully?
Is there any good documentation available for the same?
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On 14/08/14 15:42, Mridhul Pax wrote:
Stackers - I'm planning to use Citrix XenServer as a hypervisor in my
environment (Icehouse).

Has anyone implemented this successfully?

It does work .... but :

Is there any good documentation available for the same?

Unfortunately not - the Xen documentation has not been updated for more
than two years :(

http://lists.openstack.org/pipermail/openstack-dev/2014-June/038092.html

Regards,

Tom

[Openstack] Information

Hi All,

I have an Icehouse three-node installation on the Ubuntu platform. I am trying to implement autoscaling using the wordpress template.
I need a template which describes multiple neutron networks and is for use with Ubuntu cloud-ready images.

Thanks
Kumar




Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] nova-compute unable to start

Hi All,

I installed a 3-node OpenStack with Ubuntu 14.04.1, following the
document on the official OpenStack website. I installed everything and it looks
fine, but I don't know why nova-compute is unable to start up on the compute
node. The log is below; can anyone please help? Thank you!
(P.S. A few weeks before, I had it installed and working without problems, but
I don't know why it suddenly stopped working now. I am very sure all steps
and necessary packages have been installed, and I have tried to re-install many
times, but the result is the same.)

---- nova-compute.log ---
2014-08-14 15:09:10.474 13058 INFO nova.openstack.common.periodic_task [-] Skipping periodic task _periodic_update_dns because its interval is negative
2014-08-14 15:09:10.508 13058 INFO nova.virt.driver [-] Loading compute driver 'libvirt.LibvirtDriver'
2014-08-14 15:09:10.532 13058 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on controller:5672
2014-08-14 15:09:10.551 13058 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on controller:5672
2014-08-14 15:09:10.613 13058 AUDIT nova.service [-] Starting compute node (version 2014.1.2)
2014-08-14 15:10:10.678 13058 WARNING nova.virt.libvirt.driver [req-68c41414-06de-4cc2-9981-4caf14320716 None None] Cannot update service status on host: compute1, due to an unexpected exception.
2014-08-14 15:10:10.678 13058 TRACE nova.virt.libvirt.driver Traceback (most recent call last):
2014-08-14 15:10:10.678 13058 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2834, in _set_host_enabled
2014-08-14 15:10:10.678 13058 TRACE nova.virt.libvirt.driver     service = service_obj.Service.get_by_compute_host(ctx, CONF.host)
2014-08-14 15:10:10.678 13058 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 110, in wrapper
2014-08-14 15:10:10.678 13058 TRACE nova.virt.libvirt.driver     *args, **kwargs)
2014-08-14 15:10:10.678 13058 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.7/dist-packages/nova/conductor/rpcapi.py", line 425, in object_class_action
2014-08-14 15:10:10.678 13058 TRACE nova.virt.libvirt.driver     objver=objver, args=args, kwargs=kwargs)
2014-08-14 15:10:10.678 13058 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 150, in call
2014-08-14 15:10:10.678 13058 TRACE nova.virt.libvirt.driver     wait_for_reply=True, timeout=timeout)
2014-08-14 15:10:10.678 13058 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 90, in _send
2014-08-14 15:10:10.678 13058 TRACE nova.virt.libvirt.driver     timeout=timeout)
2014-08-14 15:10:10.678 13058 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 412, in send
2014-08-14 15:10:10.678 13058 TRACE nova.virt.libvirt.driver     return self._send(target, ctxt, message, wait_for_reply, timeout)
2014-08-14 15:10:10.678 13058 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 403, in _send
2014-08-14 15:10:10.678 13058 TRACE nova.virt.libvirt.driver     result = self._waiter.wait(msg_id, timeout)
2014-08-14 15:10:10.678 13058 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 267, in wait
2014-08-14 15:10:10.678 13058 TRACE nova.virt.libvirt.driver     reply, ending = self._poll_connection(msg_id, timeout)
2014-08-14 15:10:10.678 13058 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 217, in _poll_connection
2014-08-14 15:10:10.678 13058 TRACE nova.virt.libvirt.driver     % msg_id)
2014-08-14 15:10:10.678 13058 TRACE nova.virt.libvirt.driver MessagingTimeout: Timed out waiting for a reply to message ID b881acc7092543a0affe821f34a7b531
2014-08-14 15:10:10.678 13058 TRACE nova.virt.libvirt.driver
2014-08-14 15:10:10.683 13058 WARNING nova.virt.libvirt.driver [-] URI qemu:///system does not support events: Cannot write data: Broken pipe
2014-08-14 15:10:10.687 13058 ERROR nova.openstack.common.threadgroup [-] internal error: client socket is closed
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 117, in wait
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup     x.wait()
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 49, in wait
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup     return self.thread.wait()
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 168, in wait
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup     return self._exit_event.wait()
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup     return hubs.get_hub().switch()
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in switch
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup     return self.greenlet.switch()
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 194, in main
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup     result = function(*args, **kwargs)
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py", line 483, in run_service
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup     service.start()
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 163, in start
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup     self.manager.init_host()
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1030, in init_host
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup     self.driver.init_host(host=self.host)
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 658, in init_host
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup     self._do_quality_warnings()
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 641, in _do_quality_warnings
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup     caps = self._get_host_capabilities()
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2870, in _get_host_capabilities
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup     xmlstr = self._conn.getCapabilities()
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 179, in doit
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup     result = proxy_call(self._autowrap, f, *args, **kwargs)
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 139, in proxy_call
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup     rv = execute(f, *args, **kwargs)
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 77, in tworker
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup     rv = meth(*args, **kwargs)
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 3300, in getCapabilities
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup     if ret is None: raise libvirtError ('virConnectGetCapabilities() failed', conn=self)
2014-08-14 15:10:10.687 13058 TRACE nova.openstack.common.threadgroup libvirtError: internal error: client socket is closed


---- libvirtd.log ----
2014-08-14 07:09:40.656+0000: 5912: info : libvirt version: 1.2.2
2014-08-14 07:09:40.656+0000: 5912: warning : virKeepAliveTimerInternal:140 : No response from client 0x7f7a1d1796c0 after 5 keepalive messages in 30 seconds

Thx

Bill


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi Bill,

Could you please check whether libvirt daemon works well?
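For example, something like this (a sketch; the service name is as
packaged on Ubuntu 14.04):

# is libvirtd up and answering on the system socket?
service libvirt-bin status
virsh -c qemu:///system capabilities > /dev/null && echo "libvirt OK"

# watch for keepalive/connection errors while nova-compute starts
tail -f /var/log/libvirt/libvirtd.log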


--
Tang Yaguang


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Autoscaling

Hi All,

Can anyone point me to the right link for a Fedora 20 JEOS image? The following autoscaling template
https://raw.githubusercontent.com/openstack/heat-templates/master/cfn/F17/AutoScalingCeilometer.yaml
seems to be hardcoded for the LoadBalancer to work only with Fedora 20 JEOS. Also, I need an autoscaling
template which specifies multiple Neutron networks along with floating IPs for the VMs.

Thanks
Kumar




Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On 08/14/2014 12:58 PM, yalla.gnan.kumar@accenture.com wrote:
Can anyone point me to right link to Fedora 20 Jeos image ? The
following autoscaling template

http://fedoraproject.org/en/get-fedora#clouds

--
Christian Berendt
Cloud Computing Solution Architect
Mail: berendt@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

[Openstack] [Murano] Screencasts series: MuranoPL as a unified tool for integrating applications

Hi everyone!

Let me introduce a new video in a series of screencasts about Murano.
Today it is about MuranoPL [1] - a platform-independent language with
which you can describe the deployment of any application with a minimum
of code.

Examples with existing applications can be found in the
murano-app-incubator git repository [2].

We are waiting for your questions in the #murano channel.

[1] http://youtu.be/gIHgxSeG2Zg
[2] https://github.com/murano-project/murano-app-incubator


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Connect VM network to exists VLAN network

Hi
I have two networks:
vm network: 10.2.21.0/24
existing VLAN network: 192.168.1.0/24
I want to connect my VM network to the physical VLAN network.
How do I configure this connection?

Thanks :)


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi Nhan,
I guess more information is required to help you out there.

It would be helpful to know which setup you're using, e.g. a single-node
or multi-node setup, and Open vSwitch for network virtualization or something
else. Are you using nova-network or Neutron networking?

If I got it right, you created a tenant network in OpenStack (in your
case the vm network). In addition, your host (hypervisor) has an eth
interface into the physical network, right?

You also mentioned that your physical network is a VLAN network. Is your
hypervisor aware of this VLAN tagging, or is this done by an access port
config on your switch?

And what you want to achieve is to connect a vm attached to the
openstack vm network to your physical vlan network. Did I get you right?

Basically you would create a so-called "provider network" that
represents your physical network and connect this provider network via a
virtual router to your vm network, as sketched below.

You can find more information here:
http://docs.openstack.org/admin-guide-cloud/content/under_the_hood_openvswitch.html#under_the_hood_openvswitch_scenario1
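With the Icehouse-era neutron CLI it would look roughly like this (a
sketch; the physical network label, VLAN ID, names, and the
<vm-network-subnet-id> placeholder must match your own plugin config):

# admin credentials; "physnet1" and VLAN 100 must match the
# bridge_mappings / network_vlan_ranges in your plugin config
neutron net-create physical-vlan --shared \
  --provider:network_type vlan \
  --provider:physical_network physnet1 \
  --provider:segmentation_id 100
neutron subnet-create physical-vlan 192.168.1.0/24 \
  --name physical-vlan-subnet --gateway 192.168.1.1 --disable-dhcp

# route between the tenant network and the provider network
neutron router-create vm-to-vlan
neutron router-interface-add vm-to-vlan <vm-network-subnet-id>
neutron router-interface-add vm-to-vlan physical-vlan-subnet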

Regards,
Andreas



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] where is the RESTful api for nova in source code

Hi, list:
I added a simple function to Nova and want to call it through the existing RESTful API. I wonder where the RESTful API for Nova lives in the source code, and whether there is a tutorial for developers.
Thanks a lot.

zhchaobeyond@gmail.com


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Have you checked the developer documentation? Especially "Adding a Method
to the OpenStack API".

http://docs.openstack.org/developer/nova/devref/index.html
http://docs.openstack.org/developer/nova/devref/addmethod.openstackapi.html
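As a rough pointer, in an Icehouse-era nova tree the REST API controllers
live under nova/api/openstack/compute/, e.g.:

ls nova/api/openstack/compute/          # core v2 API controllers
ls nova/api/openstack/compute/contrib/  # optional API extensions - the usual place to hook in a new method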

--
YY Inc. is hiring openstack and python developers. Interested? Check
http://soa.game.yy.com/jobs.html

--
Thanks,
Yuanle


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Heat, keystone trust, and horizon: why is the password box there?

I believe I have heat set up correctly using the trust & domain model (on
icehouse / ubuntu 14.04). I followed

http://hardysteven.blogspot.ca/2014/04/heat-auth-model-updates-part-1-trusts.html
and
http://hardysteven.blogspot.ca/2014/04/heat-auth-model-updates-part-2-stack.html

and heat is working fine from the command line (when the environment
variables are set), and from horizon (when I enter my password a
second time on the 'launch stack' box).

But my understanding was that I would not have to supply my password a
second time. I can't leave the box blank (it says it's required).

Am I misunderstanding this? I don't see code in Horizon that would
'hide' this box.
If I'm not misunderstanding, is there a way to 'test' the setup of the
domain, or of the trust?
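For reference, the trust/domain setup from those blog posts comes down to
a few heat.conf settings (a sketch; the domain ID and password are
placeholders):

[DEFAULT]
deferred_auth_method = trusts
trusts_delegated_roles = heat_stack_owner
stack_user_domain = <heat-domain-id>
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = <password>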


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Hi,

It seems this change hasn't been propagated to Horizon yet, see
https://bugs.launchpad.net/horizon/+bug/1290344 .

Hope this helps,

Julie



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] SWIFT AND FILE/CLOUD GATEWAY

Has anyone constructed their own high-performance/enterprise-ready
file/cloud gateway? If so, what software components were used? I am
specifically looking for a gateway which supports CIFS/NFS. At first
glance it seems the only immediate options are to share, via samba/nfs, a
fuse-mounted swift 'file system'.
Thanks!


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Has anyone constructed their own high-performance/enterprise-ready file/cloud gateway? If so, what
software components were used? I am specifically looking for a gateway which supports CIFS/NFS.

What do you mean by cloud? What are you intending to do with it?
CIFS and NFS support are usually more in the "realms" of SANs / fileservers.

At first glance it seems the only immediate options are to share, via samba/nfs, a fuse-mounted
swift 'file system'.

You really do not want to run a filesystem where you expect any kind of performance wrapped over an HTTP-based protocol ;)
There are some tools with which you can "mount" a swift store as a directory on your machine so less savvy users can use it to copy some files.
( e.g. http://www.mirantis.com/openstack-portal/the-comparison-of-openstack-swift-compatible-clients/ )

Cheers,
Robert van Leeuwen

[Openstack] [OSSA 2014-026] Multiple vulnerabilities in Keystone revocation events (CVE-2014-5251, CVE-2014-5252, CVE-2014-5253)

OpenStack Security Advisory: 2014-026
CVE: CVE-2014-5251, CVE-2014-5252, CVE-2014-5253
Date: August 15, 2014
Title: Multiple vulnerabilities in Keystone revocation events
Reporter: Lance Bragstad (Rackspace) - CVE-2014-5252
Brant Knudson (IBM) - CVE-2014-5251, CVE-2014-5253
Products: Keystone
Versions: 2014.1 versions up to 2014.1.1

Description:
Lance Bragstad from Rackspace and Brant Knudson from IBM reported 3
vulnerabilities in Keystone revocation events. Lance Bragstad discovered
that UUID v2 tokens processed by the V3 API are incorrectly updated and
get their "issued_at" time regenerated (CVE-2014-5252). Brant Knudson
discovered that the MySQL token driver stores expiration dates
incorrectly which prevents manual revocation (CVE-2014-5251) and that
domain-scoped tokens don't get revoked when the domain is disabled
(CVE-2014-5253). Tokens impacted by one of these bugs may allow a user
to evade token revocation. Only Keystone setups configured to use
revocation events are affected.

Juno (development branch) fix:
https://review.openstack.org/111106
https://review.openstack.org/109747
https://review.openstack.org/109819
https://review.openstack.org/109820

Icehouse fix:
https://review.openstack.org/112087
https://review.openstack.org/111772
https://review.openstack.org/112083
https://review.openstack.org/112084

Notes:
These fixes will be included in the Juno-3 development milestone and are
already included in the 2014.1.2.1 release.

References:
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-5251
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-5252
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-5253
https://launchpad.net/bugs/1347961
https://launchpad.net/bugs/1348820
https://launchpad.net/bugs/1349597

--
Tristan Cacqueray
OpenStack Vulnerability Management Team


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [Swift] Does anyone deploy Swift with non-Eventlet WSGI servers?

As far as I know, most Swift deployments use eventlet.wsgi as their WSGI
server. Are there any out there that use Apache/mod_wsgi or anything else?

I ask because I'm looking at making better use of the 100 Continue
response inside Swift (proxy ---> object server requests, nothing
client-facing) to facilitate the development of erasure-code support.
Specifically, I'm looking at adding a header to the 100 Continue response.

The reason this affects Apache/mod_wsgi folks is that WSGI doesn't give
you any access to the 100 Continue response. PEP 3333 declares that a
WSGI server must support sending a 100 Continue response, but it's
handled transparently by the WSGI server and not the application. Thus,
to get access to that stuff, I need to do things that go beyond WSGI. I
have an idea how to make this happen for eventlet, but I can't fix every
single WSGI server out there.

So, Swift operators: what WSGI server are you using for your account,
container, and object servers?

Also, if erasure-code support required you to switch your Swift account,
container, and object servers to eventlet.wsgi, would you switch or
would you not use erasure codes?

(Note that this is just the internal HTTP servers, not the Swift proxy
server. Nothing here would change the client <--> Swift protocol at all.)


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Why ceilometer use mongo?

Hello,

I am wondering why Ceilometer uses MongoDB instead of MySQL or some relational
database.

What are the advantages of using MongoDB in this specific case?

Thanks.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

From what I have seen and heard, MySQL simply won't be able to handle it speed-wise.
You might be able to get a very, very small cluster to run on MySQL.
This will be less of an issue with Mongo, since writes are faster with Mongo.

I wonder if anyone runs a cluster of some size with Ceilometer, though (and what they use).
During a poll at the operators meetup there was no one running it properly in production due to performance issues.

We decided we only wanted to have some graphs of instances and do not need the other functionality, so we built a plugin to put it in Graphite: http://engineering.spilgames.com/using-ceilometer-graphite/

Cheers,
Robert van Leeuwen

Sent from my iPad



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Libvirt alternatives

Hello,

I cannot find specific and concrete information about alternatives to
libvirt. I mean, if I don't want to use libvirt, what are my alternatives in
OpenStack/Nova?

Thanks a lot!

~GA


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Maybe this will help you:
https://wiki.openstack.org/wiki/HypervisorSupportMatrix

I would look closely at the groups listed under Driver Testing Status
before making choices regarding anything serious.

-Chris



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [IceHouse][Trusty] iptables rules disappeared from within Tenant Namespace (qdhcp-XXX), all tenants affected! Metadata not working anymore...

Guys,

Today I'm facing an old/new problem... Metadata doesn't work anymore
(again)... From an already running instance, I'm seeing:


ubuntu@linux-builder-1:~$ curl http://169.254.169.254/
curl: (7) Failed to connect to 169.254.169.254 port 80: Connection refused

Then I looked at my Neutron Node, at this tenant's Namespace, and there are no
iptables rules there, look:


root@neutron-node-1:/var/log/neutron# ip netns exec
qdhcp-f0076840-43f3-4b2e-aa15-d6b2422e3795 iptables -L -nv -t nat
Chain PREROUTING (policy ACCEPT 4 packets, 776 bytes)
pkts bytes target prot opt in out source
destination

Chain INPUT (policy ACCEPT 4 packets, 776 bytes)
pkts bytes target prot opt in out source
destination

Chain OUTPUT (policy ACCEPT 1 packets, 328 bytes)
pkts bytes target prot opt in out source
destination

Chain POSTROUTING (policy ACCEPT 1 packets, 328 bytes)
pkts bytes target prot opt in out source
destination

root@neutron-node-1:/var/log/neutron# ip netns exec
qdhcp-f0076840-43f3-4b2e-aa15-d6b2422e3795 ip -4 r
default via 10.192.0.1 dev tap216bf57d-e8
10.192.0.0/20 dev tap216bf57d-e8 proto kernel scope link src 10.192.0.3
169.254.0.0/16 dev tap216bf57d-e8 proto kernel scope link src
169.254.169.254


I can see that "curl request" within the Namespace, with tcpdump:

root@neutron-node-1:~# ip netns exec
qdhcp-f0076840-43f3-4b2e-aa15-d6b2422e3795 tcpdump -ni tap216bf57d-e8

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap216bf57d-e8, link-type EN10MB (Ethernet), capture size
65535 bytes
21:01:29.582960 IP 10.192.0.90.55635 > 169.254.169.254.80: Flags [S], seq
2521313833, win 29200, options [mss 1460,sackOK,TS val 85649833 ecr
0,nop,wscale 7], length 0
21:01:29.583140 IP 169.254.169.254.80 > 10.192.0.90.55635: Flags [R.], seq
0, ack 2521313834, win 0, length 0


If I'm not wrong, there were some iptables NAT rules there to redirect the
metadata traffic to the Nova API (controller-node-1) at TCP port 8775,
right?!

I'm using VLAN Provider Networks (No L3 Router), my Instances have a
route to the 169.254.169.254 IP, via their Namespace IP (10.192.0.3), look:


ubuntu@linux-builder-1:~$ ip r
default via 10.192.0.1 dev eth0
10.192.0.0/20 dev eth0 proto kernel scope link src 10.192.0.90
169.254.169.254 via 10.192.0.3 dev eth0

ubuntu@linux-builder-1:~$ ping -c 1 10.192.0.3
PING 10.192.0.3 (10.192.0.3) 56(84) bytes of data.
64 bytes from 10.192.0.3: icmp_seq=1 ttl=64 time=4.55 ms


It crashed today, it was okay yesterday... This time, I did nothing wrong
(I think)... :-(

BTW, I just upgraded all nodes (apt-get update / dist-upgrade), still
doesn't work... Proposed repo enabled.

I really appreciate any help!

Tks!
Thiago


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Guys,

I'm seeing the following error on Neutron Node:

--
2014-08-15 20:36:59.596 12685 ERROR neutron.agent.dhcp_agent
[req-17198f16-4149-47f8-8647-0b381df7d888 None] Unable to enable dhcp for
f0076840-43f3-4b2e-aa15-d6b2422e3795.
--

I'm getting there... :-)
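For VLAN provider networks with no L3 router, one thing worth checking is
the DHCP agent's isolated-metadata support (a sketch; paths and service
names as packaged on Ubuntu, namespace ID from the output above):

# isolated-network metadata needs this in /etc/neutron/dhcp_agent.ini:
#   enable_isolated_metadata = True
grep enable_isolated_metadata /etc/neutron/dhcp_agent.ini

# restart the agent, then confirm a proxy listens on port 80 in the namespace
service neutron-dhcp-agent restart
ip netns exec qdhcp-f0076840-43f3-4b2e-aa15-d6b2422e3795 netstat -lnt | grep ':80'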



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] problem with nova retry to neutron under load

[icehouse on ubuntu 14.04]

In neutronclient/v2_0/client.py, in __init__, it sets self.retries = 0.
Later, the logic is max_attempts = self.retries + 1, so effectively a
single attempt with no retries.

In my setup, when I do a mass delete (e.g. 40 Heat stacks that each
have 5 instances w/ 10 networks), once in a while I will get an error
on delete, indicating neutron is unavailable.

At first I thought it was https://review.openstack.org/#/c/89645/ but
that fix is in the codebase already.

It seems to me that there should be some better backoff/retry logic,
e.g. use the Ethernet-style algorithm: pick a random delay, wait that
amount, then double it each time there is a problem connecting (sketched below).
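A minimal sketch of that randomized exponential backoff around a client
call (the neutron command and the PORT_ID variable are just stand-ins):

max_attempts=5
for attempt in $(seq 1 $max_attempts); do
    neutron port-delete "$PORT_ID" && break   # the call being retried (illustrative)
    # randomized delay whose ceiling doubles each attempt
    sleep $(( RANDOM % (2 ** attempt) + 1 ))
done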

Is anyone else seeing sporadic failures on delete when under load like this?


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] How to mirror ubuntu cloud archive?

Hi stackers,

The Ubuntu cloud archive hosts the latest OpenStack packages for Ubuntu LTS
releases.

https://wiki.ubuntu.com/ServerTeam/CloudArchive

Is there an rsync interface to mirror it?

rsync rsync://ubuntu-cloud.archive.canonical.com/
returns no modules.

--
YY Inc. is hiring openstack and python developers. Interested? Check
http://soa.game.yy.com/jobs.html

--
Thanks,
Yuanle


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

apt-mirror should be able to mirror it
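For example, an /etc/apt/mirror.list entry for the Icehouse pocket might
look like this (a sketch; the suite name follows the cloud archive layout):

# mirror the Icehouse pocket of the Ubuntu Cloud Archive for trusty
deb http://ubuntu-cloud.archive.canonical.com/ubuntu trusty-updates/icehouse main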


--
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333

[Openstack] Hot and Cold migrations between zones or regions

Hi,

I have been searching for some time and have found nothing conclusive yet, so I am posting my question to the list.

I understand that OpenStack can be split into availability zones and then regions, so, for example's sake, a zone is a different data hall or rack within the same data centre and a region is another country or data centre.

Is it possible to live migrate a virtual machine from one zone to another?

I suppose the same question could apply to a region, but I understand networking could be an issue if the region is on a different network?

Thanks

Pieter


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

It depends on which hypervisor you use. For example, if you are on ESX, then
they have a concrete solution for migrating VMs (you can configure your data
center like that). But if you are using the KVM hypervisor, something is
cooking in that direction and migration is not fully automated.



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Re: [Openstack] [Sahara] Can't login to node...

As a follow-up, the question was answered here:
https://ask.openstack.org/en/question/43819/sahara-cant-login-to-node/

Thanks,

Dmitry

2014-07-29 6:37 GMT+04:00 Dat Tran dattbbk@gmail.com:

Hi everyone,

I have created a Hadoop cluster. I can ssh in and run the simplest MapReduce
example in the instances, but the cluster always just stays in the "waiting"
state. When I check the log, I see:

2014-07-24 14:20:54.906 14857 DEBUG sahara.service.engine [-] Can't login to node test-sahara-master-001 (192.168.50.168), reason AuthenticationException: Authentication failed. _wait_until_accessible /home/bkcloud/sahara-venv/local/lib/python2.7/site-packages/sahara/service/engine.py:95
2014-07-24 14:20:54.946 14857 DEBUG sahara.utils.ssh_remote [-] [test-sahara-worker-002] execute_command took 1.2 seconds to complete _log_command /home/bkcloud/sahara-venv/local/lib/python2.7/site-packages/sahara/utils/ssh_remote.py:407
2014-07-24 14:20:54.947 14857 DEBUG sahara.service.engine [-] Can't login to node test-sahara-worker-002 (192.168.50.170), reason AuthenticationException: Authentication failed. _wait_until_accessible /home/bkcloud/sahara-venv/local/lib/python2.7/site-packages/sahara/service/engine.py:95

This is my sahara.conf:

[DEFAULT]
os_auth_host=127.0.0.1
os_auth_port=35357
os_admin_username=admin
os_admin_password=$pass
os_admin_tenant_name=admin

use_floating_ips=True
use_neutron=True
use_namespaces=False
log_dir=/var/log/sahara
log_file=sahara.log
[database]
connection=sqlite:////tmp/sahara.db
[keystone_authtoken]
auth_uri=http://127.0.0.1:5000/v2.0/
identity_uri=http://127.0.0.1:35357/

And add to
/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py:

SAHARA_USE_NEUTRON = True
AUTO_ASSIGNMENT_ENABLED = False

I use plugin: http://docs.openstack.org/developer/s...

What is the problem here? Thank you very much!


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [Swift] IP address of Swift nodes : need help

Hi,

We are deploying Swift clusters where IP addresses are assigned to
Swift nodes via DHCP. Curious to know: what will happen if the IP address of a
given node changes after that node is made part of the Swift cluster? For
example, let us assume that a Swift object node got the IP 192.168.10.2 and it
later changes to 192.168.10.9 because of DHCP. Will the running Swift cluster
be affected?

Regards,
Jyoti Ranjan


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

If the IP for a storage node changes, you'll need to update the rings where that server's drives are. You can update the IP with the swift-ring-builder set_info ... command and then use "write_ring" to serialize it. Doing this will not cause any data movement in the cluster. Removing the server and re-adding it to the ring will cause data movement.

So, no, it's not strictly necessary to use static IPs. You'll be saving yourself some management overhead if you do, though.
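For the record, that flow looks roughly like this (a sketch; the builder
file, zone, port, and device names are illustrative):

# find the device entries still registered under the old IP
swift-ring-builder object.builder search z1-192.168.10.2:6000/sdb1

# rewrite the IP on a matching device (repeat per drive and per ring)
swift-ring-builder object.builder set_info z1-192.168.10.2:6000/sdb1 192.168.10.9:6000/sdb1

# serialize the updated ring; no rebalance, so no data movement
swift-ring-builder object.builder write_ring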

--John



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Live migration implementation

All,

Aside from the shared storage and block migration options, how is live
migration implemented in Nova? I've grepped in all the usual places
and "Configuring Migrations" seems to imply that, by default, Nova does
pure stop-and-copy.

It also says Nova can be configured to use libvirt's live migration
but this might not complete if pages get dirtied faster than they can
be migrated. This implies that libvirt uses pre-copy but doesn't have
any stopping conditions to prevent non-completion.

Is all this correct?

Are there any plans to support pre-copy or post-copy?

Thanks,

--Craig

Dr. Craig A. Lee, lee@aero.org
Senior Scientist, Computer Systems Research Dept.
The Aerospace Corporation, M1-102
2310 East El Segundo Blvd.
El Segundo, CA 90245 USA
voice: 310-336-1381
fax: 310-336-0613
http://www.aero.org

The Aerospace Corporation operates a non-profit,
federally funded research and development corporation.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Why Horizon use Django instead Flask or other?

Hello,

I am wondering why Horizon is built on Django instead of another Python
framework like Pylons, Flask, or Zope?

Thanks for your time!
~GA


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

At the point at which the Horizon project first began (during the Cactus cycle, I believe) the handful of folks working on it wanted a Python web framework that would get them up and running as fast as possible. That meant lots of built-in, “batteries-included” features so they didn’t need to reinvent a bunch of wheels; and that also meant something they had at least passing familiarity with. So they chose Django.

Speaking from personal opinion at this point (and as a former Horizon PTL and Django core developer): Flask is good for writing APIs or very simple sites, but not for large webapps. They would have had to write a ton of code for things other frameworks have built-in. And Pylons and Zope don’t have nearly the base of community, resources, or documentation that Django does. Twisted would have been another option, but the event-driven model of Twisted is hard for many people to wrap their heads around and wasn’t appropriate at the time.

All things considered I think the choice of Django was relatively sound (I had nothing to do with the original choice). It’s allowed the project to grow and scale effectively over the last 3+ years.

At this point, though, there’s a clear demand for a more responsive and real-time dashboard experience, and that generally means moving as much as possible into a client-side JavaScript-based webapp. That’s why the Horizon team has been making efforts to move towards things like angular.js and socket.io to help bring the next era of modern Horizon functionality. It’s not a simple or easy transition, but as that transition takes place a vast amount of the Django code will likely go away.

I hope that helps shed some light on the historical reason for “why Django” as well as why it may matter less that it was Django and not something else over time.

All the best,

  • Gabriel



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Routed management network

Hello,

Is it possible to have a routed management network, especially in terms of
the neutron dhcp server and metadata agent?

Thanks in advance!

Greetings

Chris


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] visa invitation letter for Paris summit

Hi guys,

I'm not sure this is the appropriate place to ask questions about the visa invitation letter for the Paris summit. I followed https://www.openstack.org/summit/openstack-paris-summit-2014/visa-information/ to fill in the form to request a visa invitation letter more than a week ago, but haven't got any reply yet. I've also asked events@openstack.org, but still got no response. I'm not sure how long it will take to get the invitation letter. Does anyone know anything about this? Thanks!

Best Regards,
-Lianhao


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Hi,

events@openstack.org is the correct place for this. It's OK to reply to
your email asking for an update on the status of your request - the
people listening to the address are all friendly people, just a little
busy at the moment :) Sorry for the delayed reply!

Normally visa invitations are done in batches, rather than 1-by-1, so
you are probably just waiting for the next batch to be done. I'd suggest
replying to your email to events@openstack.org asking for timing
information :)

Regards,

Tom

[Openstack] [heat][docker] How to mange multi nodes by openstack-heat-docker plugin?

Hi all,
I want to know how the heat+docker plugin manages multiple nodes that have Docker deployed.
For example, if I have a template for a Docker instance and I have two Docker nodes, 'dockerA' and 'dockerB' (like two hypervisors),
I want to know which node will be chosen to boot my Docker instance.

Thanks in advance!

2014-08-19 15:40 (UTC+8)
Wangpan_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On 19/08/14 04:00, Wangpan wrote:
Hi all,
I want to know how the heat+docker plugin manages multiple nodes which have docker deployed.
For example, if I have a template for a docker instance, and I have two docker nodes 'dockerA' and 'dockerB' (like two hypervisors),
I want to know which node will be chosen to boot my docker instance?

This was asked recently on openstack-dev as well; here is the answer
from Eric:

http://lists.openstack.org/pipermail/openstack-dev/2014-August/043154.html

Thanks in advance!

2014-08-19 15:40 (UTC+8)
Wangpan


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [Havana] Can config-drive pass information to guest VM post launch?

Hi All

I want to know if there is a way in OpenStack by which we can pass
additional configuration to a guest VM after the VM boots up and is already
running?

Config-drive, user-data and metadata all seem to pass information at VM
boot up, but not post boot.

Is there a way?

Regards,
Saurabh


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Determining OpenStack Supported API Features

Hi Everyone,

I'm interested in any information around determining the supported feature
set for individual OpenStack installations. Our product works with multiple
cloud providers, and with the great stability and usability improvements in
Havana and Icehouse, we've seen a lot of the installations we work with
migrate forward. Today we see grizzly being phased out, Havana seeing
pretty good use, and Icehouse being adopted. The improvements in Juno,
Kilo and beyond seem likely to continue to push this trend, but it also
seems likely we will see a lot of installations stick with what works.

Just between Grizzly and Icehouse we have some pretty significant variances
in how we need to work with the API. The progression of Neutron as a
replacement/alternative for Nova networking and the introduction of
Projects are two areas that require a lot more than just calling a
different API endpoint.

Up to this point, we find ourselves individually picking out support
features, but as the feature matrix grows and cross-dependencies develop,
this seems likely to become overwhelming. From our perspective it will be
critical to define a restricted set of minimally-supported-feature-sets
(i.e. "Icehouse with Neutron networking" vs "Icehouse with Nova
networking"), and it would go a long way if OpenStack itself maintained at
least a list of such recommended "baselines" and ideally exposed the
available baseline through the API in a more direct manner than querying
for each individual feature.

Is there any existing work/plans/thoughts in this area?

Is this something that would be seen as valuable to usability or a
hindrance to experimentation and development?

--
Andrew Mann
DivvyCloud Inc.
www.divvycloud.com
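
For what it's worth, the per-feature probing described above is usually done by inspecting the service catalog and each service's advertised extensions. A hedged sketch with the Icehouse-era CLIs:

keystone catalog            # which services (e.g. neutron vs. nova-network) are registered
nova list-extensions        # compute API extensions this cloud exposes
neutron ext-list            # Neutron's advertised extensions, if Neutron is deployed

This is exactly the per-feature querying the post argues becomes unwieldy; a published "baseline" exposed through the API would replace dozens of such probes.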


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [OpenStack][Horizon] Editing Horizon Functions

Hello,

I need to modify a given function in Horizon; is it possible? I'd like to
change some of the modifications that are made to Neutron database tables by
the dashboard functions.

Thanks!


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Seeking external network setup advice

Hi list,

I've been trying to set up an external network for a few days and this is driving me crazy.  I have a topology close to the one described in the Ubuntu 14.04 documentation for Icehouse (using Neutron networking, not legacy).

My setup is running completely under VirtualBox as this is just a POC at the moment. I have checked that my neutron node is allowed to put its network interfaces in promiscuous mode, which seems to be the case, but none of the interfaces on my neutron node actually get put in promisc mode.

My external network is 192.168.199.0/24 (I don't have a public network available) and I have an external router providing a gateway on 192.168.199.1.  My pool is defined from .101 to .150.
However, I cannot ping .101 from the external router. I have tried adding an extra network interface to the neutron node plugged into the same network as the external router (.45), and from that one I can ping the router just fine.

My questions are:
Is it normal that none of the neutron node interfaces are in promisc mode?

root@os-network:~# netstat -i
Kernel Interface table
Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
br-ex      1500 0        11      0      0 0             8      0      0      0 BRU
br-int     1500 0        22      0      0 0             8      0      0      0 BRU
br-tun     1500 0         0      0      0 0             8      0      0      0 BRU
eth0       1500 0     12199      0      0 0          3959      0      0      0 BMRU
eth1       1500 0      1338      0      0 0          1940      0      0      0 BMRU
eth2       1500 0         2      0      0 0            18      0      0      0 BMRU
eth3       1500 0        15      0      0 0            17      0      0      0 BMRU
eth4       1500 0         9      0      0 0            14      0      0      0 BMRU
lo        65536 0        18      0      0 0            18      0      0      0 LRU

Do you have any pointer or direction I can follow to troubleshoot this?

Thanks already for your help

Cheers,

Olivier

--

Olivier Cant, CEO | Gsm: +32(0)497/64.18.22
Exxoss, SPRL
Rue de la station, 2, 4347, Fexhe-le-haut-clocher | Telephone: +32(0)4/341.25.81 | Fax: +32(0)4/371.94.06

Hi Olivier,

I'm not running on virtual box - I have a 2 NIC setup using Neutron networking with 1 interface dedicated to "public" access and one interface split into tenant VLANs plus a 'service' vlan (so that compute and storage nodes exist entirely on this service vlan and not on the public network at all).

Here's my netstat; you can see that eth0 and eth1 are in promiscuous mode, but these were set up this way manually in my /etc/network/interfaces definitions:

auto eth0

iface eth0 inet manual

        pre-up ifconfig $IFACE up promisc

        post-down ifconfig $IFACE down

auto eth1

iface eth1 inet manual

        pre-up ifconfig $IFACE up promisc

        post-down ifconfig $IFACE down

Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg

br-backend  1500 0   8178788      0      0 0             8      0      0      0 BRU

br-int     1500 0     24796      0      0 0             8      0      0      0 BRU

br-os-service  1500 0  25600082      0      0 0         54114      0      0      0 BRU

br-public  1500 0  17868912      0      0 0      25942023      0      0      0 BRU

eth0       1500 0  120848297      0      0 0      93801692      0      0      0 BMPRU

eth1       1500 0  61922332      0      0 0      29232624      0      0      0 BMPRU

int-br-backend  1500 0  25954734      0      0 0      13805666      0      0      0 BMRU

lo        65536 0  253792580      0      0 0      253792580      0      0      0 LRU

You shouldn't run into this issue with VirtualBox handling your eth separation, but should you try to make vlan-based interfaces on the host and use multiple vlan interfaces in different openvswitch bridges, openvswitch will silently fail to add all but one of the interfaces. It will actually look like it was added unless you dive into the logs or the flow tables. There are a couple of extra bridge interfaces that I use here to work around the issue, but the simplest thing is to avoid using OS-level interfaces for vlans.

Other than that, openvswitch and neutron are both pretty fickle at times. My best advice is to get in the habit of restarting openvswitch-switch, neutron-l3-agent and neutron-plugin-openvswitch-agent any time you make changes. Otherwise it can be pretty frustrating to spend 20 minutes tracing packets through ports and bridges just to find out one component or another never picked up a change.

-Andrew
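
A minimal check/restart cycle along those lines, assuming the Ubuntu service names mentioned above:

sudo ovs-vsctl show                               # verify ports are actually attached to each bridge
sudo ovs-ofctl dump-flows br-ex                   # confirm flows exist on the external bridge
sudo service openvswitch-switch restart
sudo service neutron-l3-agent restart
sudo service neutron-plugin-openvswitch-agent restart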

On Tue, Aug 19, 2014 at 8:33 AM, Olivier Cant olivier.cant@exxoss.com wrote:

Hi list,

I've been trying to set up an external network for a few days and this is driving me crazy.  I have a topology close to the one described in the Ubuntu 14.04 documentation for Icehouse (using Neutron networking, not legacy).

My setup is running completely under VirtualBox as this is just a POC at the moment. I have checked that my neutron node is allowed to put its network interfaces in promiscuous mode, which seems to be the case, but none of the interfaces on my neutron node actually get put in promisc mode.

My external network is 192.168.199.0/24 (I don't have a public network available) and I have an external router providing a gateway on 192.168.199.1.  My pool is defined from .101 to .150.
However, I cannot ping .101 from the external router. I have tried adding an extra network interface to the neutron node plugged into the same network as the external router (.45), and from that one I can ping the router just fine.

My questions are:
Is it normal that none of the neutron node interfaces are in promisc mode?

root@os-network:~# netstat -i
Kernel Interface table
Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
br-ex      1500 0        11      0      0 0             8      0      0      0 BRU
br-int     1500 0        22      0      0 0             8      0      0      0 BRU
br-tun     1500 0         0      0      0 0             8      0      0      0 BRU
eth0       1500 0     12199      0      0 0          3959      0      0      0 BMRU
eth1       1500 0      1338      0      0 0          1940      0      0      0 BMRU
eth2       1500 0         2      0      0 0            18      0      0      0 BMRU
eth3       1500 0        15      0      0 0            17      0      0      0 BMRU
eth4       1500 0         9      0      0 0            14      0      0      0 BMRU
lo        65536 0        18      0      0 0            18      0      0      0 LRU

Do you have any pointer or direction I can follow to troubleshoot this?

Thanks already for your help

Cheers,

Olivier

--

Olivier Cant, CEO | Gsm: +32(0)497/64.18.22
Exxoss, SPRL
Rue de la station, 2, 4347, Fexhe-le-haut-clocher | Telephone: +32(0)4/341.25.81 | Fax: +32(0)4/371.94.06


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

--

Andrew Mann

DivvyCloud Inc.

www.divvycloud.com

[Openstack] Working example of Nova "hooks"?

I'm looking for a working example of using "hooks" to extend Nova.

I have found this reference: http://docs.openstack.org/developer/nova/devref/hooks.html
and this one: http://docs.openstack.org/developer/nova/api/nova.hooks.html.

I have found several occurrences of the "@hooks.add_hook()" decorator in the Nova code, for example
this code in nova/compute/api.py, where I should be able to add pre- and/or post-hooks for instance creation:

<snip>
@hooks.add_hook("create_instance")
def create(self, context, instance_type,
           image_href, kernel_id=None, ramdisk_id=None,
           min_count=None, max_count=None,
           display_name=None, display_description=None,
           key_name=None, key_data=None, security_group=None,
           availability_zone=None, user_data=None, metadata=None,
           injected_files=None, admin_password=None,
           block_device_mapping=None, access_ip_v4=None,
           access_ip_v6=None, requested_networks=None, config_drive=None,
           auto_disk_config=None, scheduler_hints=None, legacy_bdm=True):
<snip>

But no luck so far...

I tried writing a simple "create_instance" hook that would just make a log entry that it had been called:

First, the hook code itself, source file named openstackhookexample1.py:


from nova.openstack.common.gettextutils import _  # assumed import path for _()
from nova.openstack.common import log as logging

LOG = logging.getLogger(__name__)

class Example1HookClass(object):
    def pre(self, *args, **kwargs):
        LOG.warn(_("Example1HookClass pre called"))

    def post(self, rv, *args, **kwargs):
        LOG.warn(_("Example1HookClass post called"))

And the setup.py to go with it:

from setuptools import setup

setup(
    name='openstack_hook_example1',
    version='1.0',
    description='Demonstration of OpenStack hooks, example #1',
    py_modules=['openstack_hook_example1'],
    entry_points={
        'nova.hooks': [
            'create_instance': openstack_hook_example1.hooks.Example1HookClass,
        ]
    },
)


But when I try to install it ("python setup.py install") it fails with this error:
File "setup.py", line 16
'create_instance': openstack_hook_example1.hooks.Example1HookClass,

So it looks as if the entry_points code fragment on the first-mentioned web page above has an error.

And finally, I have not found any documentation about how to configure OpenStack to use my
hook after I manage to get it installed.

Thanks in advance,
Conrad
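
For reference, setuptools entry points are declared as 'name = module:object' strings inside a list, not as dict items, which is what the error above is complaining about. A hedged sketch of a syntactically valid setup.py, reusing the module and class names from the example:

from setuptools import setup

setup(
    name='openstack_hook_example1',
    version='1.0',
    description='Demonstration of OpenStack hooks, example #1',
    py_modules=['openstack_hook_example1'],
    entry_points={
        'nova.hooks': [
            # 'name = module:object' string, not a dict entry
            'create_instance = openstack_hook_example1:Example1HookClass',
        ],
    },
)

Once the package is installed into the same Python environment as Nova, the hook manager should discover anything registered under the 'nova.hooks' entry-point namespace on its own; no nova.conf option appears to be required.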


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] SWIFT AND HORIZON

Does Horizon support a Keystone + Swift only environment?

My Horizon instance can communicate with Keystone; however, upon login,
Horizon complains about a misconfigured compute service.

When I look at the Keystone service list, only Identity and Swift are
defined and supposedly Horizon only enables the service panels that have a
corresponding keystone service list entry.

There is another, older thread on this same topic, but the 'answer' isn't
that clear to me.
Something about manually disabling Horizon panels, etc.

Thanks!


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

While you can hack things around to make it work, the answer is that out-of-the-box it’s not supported. In the Horizon Quickstart guide it lists “Nova (compute, api, scheduler, and network), Glance, and Keystone” as the minimum required services. All others are optionally supported from there.

That said, support for a no-compute setup with Horizon has been a common request since the Essex days, and there's absolutely no reason it couldn't happen. It's not even that hard to do; it's just that nobody has filed the blueprints and done the work. Filing blueprints would be a good first step.

Keystone, however, will still be required. Some people have suggested that Horizon should support the old nova-auth and swift-auth mechanisms, but to me that just seems fractious. It sounds like using Keystone isn’t an issue for you though.

Hope that helps,

  • Gabriel

From: Brent Troge [mailto:brenttroge2016@gmail.com]
Sent: Tuesday, August 19, 2014 12:00 PM
To: openstack@lists.openstack.org
Subject: [Openstack] SWIFT AND HORIZON

Does Horizon support a Keystone + Swift only environment?
My Horizon instance can communicate with Keystone; however, upon login, Horizon complains about a misconfigured compute service.

When I look at the Keystone service list, only Identity and Swift are defined and supposedly Horizon only enables the service panels that have a corresponding keystone service list entry.
There is another, older thread on this same topic, but the 'answer' isn't that clear to me.
Something about manually disabling Horizon panels, etc.
Thanks!


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [OSSA 2014-027] Persistent XSS in Horizon Host Aggregates interface (CVE-2014-3594)

OpenStack Security Advisory: 2014-027
CVE: CVE-2014-3594
Date: August 19, 2014
Title: Persistent XSS in Horizon Host Aggregates interface
Reporters: Dennis Felsch and Mario Heiderich (Ruhr-University Bochum)
Products: Horizon
Versions: up to 2013.2.3, and 2014.1 versions up to 2014.1.2

Description:
Dennis Felsch and Mario Heiderich from the Horst Görtz Institute for
IT-Security, Ruhr-University Bochum reported a persistent XSS in
Horizon. A malicious administrator may conduct a persistent XSS attack
by registering a malicious host aggregate in Horizon Host Aggregate
interface. Once executed in a legitimate context this attack may reveal
another admin token, potentially resulting in a lateral privilege
escalation. All Horizon setups are affected.

Juno (development branch) fix:
https://review.openstack.org/115310

Icehouse fix:
https://review.openstack.org/115311

Havana fix:
https://review.openstack.org/115313

Notes:
This fix will be included in the Juno-3 development milestone and in
future 2013.2.4 and 2014.1.3 releases.

References:
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-3594
https://launchpad.net/bugs/1349491

--
Tristan Cacqueray
OpenStack Vulnerability Management Team


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] SWIFT AND RING QUESTION

Excuse this question and my lack of basic understanding. I dropped out of
school in 8th grade, so everything is basically self-taught. Here goes.

I am trying to figure out where each offset/partition is placed on the ring.

So if I have 50 drives with a weight of 100 each, I come up with the below
part power:

part power = log2(50 * 100) = 13

Using that, I then come up with the number of partitions:

partitions = 2^13 = 8192

Now here is where my ignorance comes into play. How do I use these
datapoints to determine where each offset is on the ring?

I then guess that each offset will have a fixed range of values
that map to that partition.

So for example, for offset 1, all object URL md5 hashes that have a decimal
value of 0 through 100 will go here (I just made up the range 0 through 100;
I have no idea what the range would be with respect to my given part power,
drives, etc).


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

https://swiftstack.com/blog/2012/11/21/how-the-ring-works-in-openstack-swift/ is something that should be able to give you a pretty complete overview of how the ring works in Swift and how data placement works.

Let me know if you have more questions after you watch that video.

--John
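
To the "range of values" part of the question: with a part power of 13, the ring keeps the top 13 bits of a 32-bit slice of the MD5 of the object path, so each of the 8192 partitions covers a contiguous 2^(32-13) = 524288 slice of the hash space (and note log2(5000) is about 12.3, so the 13 comes from rounding up). A hedged, simplified sketch of the mapping; real Swift also mixes a cluster-wide hash path suffix into the MD5:

import hashlib
import struct

PART_POWER = 13                 # from the thread: log2(50 * 100), rounded up
PART_SHIFT = 32 - PART_POWER

def partition_for(account, container, obj):
    # MD5 the object path, take the first 4 bytes as a big-endian int,
    # and keep only the top PART_POWER bits as the partition number.
    path = '/%s/%s/%s' % (account, container, obj)
    digest = hashlib.md5(path.encode('utf-8')).digest()
    return struct.unpack_from('>I', digest)[0] >> PART_SHIFT

print(partition_for('AUTH_test', 'photos', 'cat.jpg'))    # some value in 0..8191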

On Aug 19, 2014, at 5:34 PM, Brent Troge brenttroge2016@gmail.com wrote:

Excuse this question and my lack of basic understanding. I dropped out of school in 8th grade, so everything is basically self-taught. Here goes.

I am trying to figure out where each offset/partition is placed on the ring.

So if I have 50 drives with a weight of 100 each, I come up with the below part power:

part power = log2(50 * 100) = 13

Using that, I then come up with the number of partitions:

partitions = 2^13 = 8192

Now here is where my ignorance comes into play. How do I use these datapoints to determine where each offset is on the ring?

I then guess that each offset will have a fixed range of values that map to that partition.

So for example, for offset 1, all object URL md5 hashes that have a decimal value of 0 through 100 will go here (I just made up the range 0 through 100; I have no idea what the range would be with respect to my given part power, drives, etc).


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] UTS Capstone on SDN using OpenStack

Hi All,

Firstly my apologies if I am using this list inappropriately however I
believe it will be the best method for me to contact people within the
OpenStack space.

I am a 5th year Engineering ICT-Telecommunications student at UTS
about to begin my Capstone (Thesis/Major project) to complete my
degree.

I am looking into developing an SDN environment then going through and
testing various architectures and technologies utilizing SDN. From
here I will then conceptualize and potentially create future
applications using SDN.

At present I am talking with the university about acquiring a lab for
this project, and thought that OpenStack might have an
environment that I could potentially use to undertake this project?

If anyone is aware whether this is true or not, and/or knows of other test
environments that I could potentially use, that would be appreciated.

Regards,
David Butler


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi David,

Assuming UTS = uts.edu.au, you might already have access to an OpenStack
Cloud that was paid for by the Australian Government :)

Try to log in to the NeCTAR Research Cloud at
https://dashboard.rc.nectar.org.au/ using your UTS credentials.

Getting started guide is here:
http://support.rc.nectar.org.au/docs/getting-started

Regarding the actual SDN research, I would suggest you look into the
project we call 'Neutron'. Jump on Youtube
(https://www.youtube.com/user/OpenStackFoundation) or
http://www.slideshare.net/ to find a decent introduction.

Regards,

Tom

On 20/08/14 11:48, david butler wrote:
Hi All,

Firstly my apologies if I am using this list inappropriately however I
believe it will be the best method for me to contact people within the
OpenStack space.

I am a 5th year Engineering ICT-Telecommunications student at UTS
about to begin my Capstone (Thesis/Major project) to complete my
degree.

I am looking into developing an SDN environment then going through and
testing various architectures and technologies utilizing SDN. From
here I will then conceptualize and potentially create future
applications using SDN.

At present I am talking with the university about acquiring a lab for
this project, and thought that OpenStack might have an
environment that I could potentially use to undertake this project?

If anyone is aware whether this is true or not, and/or knows of other test
environments that I could potentially use, that would be appreciated.

Regards,
David Butler


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] New OpenStack Icehouse Installation Guide (Two-node architecture with legacy networking)

Hi all,

I want to share with you our new OpenStack Icehouse Installation Guide!

Unlike our previous guide, in which we considered a multi-node architecture
with Neutron, this manual details how to deploy OpenStack using a flat
networking model.

The new guide is available here:

https://github.com/ChaimaGhribi/Icehouse-Installation-Flat-Networking

Our installation guide for multi-node architecture with Neutron is available
here:

https://github.com/ChaimaGhribi/OpenStack-Icehouse-Installation/blob/master/OpenStack-Icehouse-Installation.rst

Hope this will be helpful!
Your questions and suggestions are welcome :)

Regards,

Chaima Ghribi


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On 08/20/2014 03:17 PM, Chaima Ghribi wrote:
Hi all,

I want to share with you our new OpenStack Icehouse Installation Guide!

Unlike our previous guide, in which we considered a multi-node
architecture with Neutron, this manual details how to deploy OpenStack
using a flat networking model.

The new guide is available here:

https://github.com/ChaimaGhribi/Icehouse-Installation-Flat-Networking

Our installation guide for multi-node architecture with Neutron is
available here:

https://github.com/ChaimaGhribi/OpenStack-Icehouse-Installation/blob/master/OpenStack-Icehouse-Installation.rst

I'd like to invite you to improve the existing OpenStack documentation,
see http://docs.openstack.org for what exists and how to join.

Your contributions are welcome!

Andreas
--
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

[Openstack] HA MySQL configuration

I need to make a HA Openstack configuration, so I need to take on the MySQL replication piece.  The docs I’ve found indicate you need to install the wsrep package for MySQL, then tie it all together with galera.  However, when I follow the docs at https://launchpadlibrarian.net/170877464/README-MySQL, I get the following errors (doing this on RHEL 6.5):

 

--> Finished Dependency Resolution

Error: Package: mysql-wsrep-5.1.53-0.7.6.x86_64 (/mysql-wsrep-5.1.53-0.7.6-x86_64)

           Requires: MySQL-client-community >= 5.1.47

Error: Package: mysql-wsrep-5.1.53-0.7.6.x86_64 (/mysql-wsrep-5.1.53-0.7.6-x86_64)

           Requires: MySQL-shared-community >= 5.1.47

 

I am totally unable to find these packages for anything newer than RHEL 5.

 

Any suggestions?

 

Dan O'Reilly

UNIX Systems Administration

9601 S. Meridian Blvd.

Englewood, CO 80112

720-514-6293

 

 

On 08/20/2014 01:01 PM, O'Reilly, Dan wrote:
I need to make a HA Openstack configuration, so I need to take on the
MySQL replication piece. The docs I’ve found indicate you need to
install the wsrep package for MySQL, then tie it all together with
galera. However, when I follow the docs at
https://launchpadlibrarian.net/170877464/README-MySQL, I get the
following errors (doing this on RHEL 6.5):

--> Finished Dependency Resolution

Error: Package: mysql-wsrep-5.1.53-0.7.6.x86_64
(/mysql-wsrep-5.1.53-0.7.6-x86_64)

        Requires: MySQL-client-community >= 5.1.47

Error: Package: mysql-wsrep-5.1.53-0.7.6.x86_64
(/mysql-wsrep-5.1.53-0.7.6-x86_64)

        Requires: MySQL-shared-community >= 5.1.47

I am totally unable to find these packages for anything newer than RHEL 5.

Any suggestions?

Hi Dan,

I highly recommend the Percona XtraDB Cluster packages and documentation:

http://www.percona.com/doc/percona-xtradb-cluster/5.6/installation.html

Best,
-jay


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] How to download image to volume

Hi All,

I know cinder has a subcommand upload-to-image, which can upload a volume
to the image service, but I want to do the opposite: download an image to a
volume without recreating the volume. Does anyone know how to do that?

Thanks


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi -

From the command line, you can supply an image id along with the cinder create command to create a volume from an image stored in glance.

e.g., cinder create --image-id 25d00ddb-203a-4d50-93e3-103ab05bfbcd ......

Just be sure that your volume is big enough to hold the image!

For more options, just type 'cinder help create'.

--ryan
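
A hedged end-to-end example (the image id is the one from above; the volume name is made up):

cinder create 10 --image-id 25d00ddb-203a-4d50-93e3-103ab05bfbcd \
       --display-name my-boot-volume      # 10 GB volume; must be at least the image size
cinder show my-boot-volume                # poll until status reaches 'available'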


From: ZHOU TAO A tao.a.zhou@alcatel-lucent.com
Sent: Thursday, August 21, 2014 12:13 AM
To: openstack@lists.openstack.org
Subject: [Openstack] How to download image to volume

Hi All,

I know cinder has a subcommand upload-to-image, which can upload a volume
to the image service, but I want to do the opposite: download an image to a
volume without recreating the volume. Does anyone know how to do that?

Thanks


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Ceilometer - 2014.1.1 issues

Hi guys,

I'm trying to get ceilometer to collect the information sent from the compute nodes, but when the collector receives a message it raises a stack trace and the message is lost.  Has anybody encountered this issue?

I'm pretty sure the connection to mongodb is established at some point, because if I make a typo in the username/password, it gives me a permission denied error message instead!

Here is the stack:
2014-08-21 08:18:31.283 8309 TRACE ceilometer.openstack.common.rpc.amqp Traceback (most recent call last):
2014-08-21 08:18:31.283 8309 TRACE ceilometer.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/ceilometer/openstack/common/rpc/amqp.py", line 462, in _process_data
2014-08-21 08:18:31.283 8309 TRACE ceilometer.openstack.common.rpc.amqp **args)
2014-08-21 08:18:31.283 8309 TRACE ceilometer.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/ceilometer/openstack/common/rpc/dispatcher.py", line 172, in dispatch
2014-08-21 08:18:31.283 8309 TRACE ceilometer.openstack.common.rpc.amqp result = getattr(proxyobj, method)(ctxt, **kwargs)
2014-08-21 08:18:31.283 8309 TRACE ceilometer.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/ceilometer/collector.py", line 106, in record_metering_data
2014-08-21 08:18:31.283 8309 TRACE ceilometer.openstack.common.rpc.amqp self.dispatcher_manager.map_method('record_metering_data',
2014-08-21 08:18:31.283 8309 TRACE ceilometer.openstack.common.rpc.amqp AttributeError: 'NamedExtensionManager' object has no attribute 'map_method'

Thank you very much,

Dave


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi guys,

 Finally, it took me the whole night to figure out that I had a stale stevedore-0.8 floating around in /usr/lib/python2.6/site-packages ...  I deleted the folder, as it wasn't owned by any rpm, and once I restarted the collector, the meters and resources were properly handled.

Dave
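
A quick way to spot that kind of stale copy, as a hedged sketch:

python -c 'import stevedore; print(stevedore.__file__)'    # which copy actually gets imported
rpm -qf /usr/lib/python2.6/site-packages/stevedore*        # "not owned by any package" flags a leftover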

From: David Hill
Sent: 21-Aug-14 4:22 AM
To: openstack@lists.openstack.org
Subject: [Openstack] Ceilometer - 2014.1.1 issues

Hi guys,

I'm trying to get ceilometer to collect the information sent from the compute nodes, but when the collector receives a message it raises a stack trace and the message is lost.  Has anybody encountered this issue?

I'm pretty sure the connection to mongodb is established at some point, because if I make a typo in the username/password, it gives me a permission denied error message instead!

Here is the stack:
2014-08-21 08:18:31.283 8309 TRACE ceilometer.openstack.common.rpc.amqp Traceback (most recent call last):
2014-08-21 08:18:31.283 8309 TRACE ceilometer.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/ceilometer/openstack/common/rpc/amqp.py", line 462, in _process_data
2014-08-21 08:18:31.283 8309 TRACE ceilometer.openstack.common.rpc.amqp **args)
2014-08-21 08:18:31.283 8309 TRACE ceilometer.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/ceilometer/openstack/common/rpc/dispatcher.py", line 172, in dispatch
2014-08-21 08:18:31.283 8309 TRACE ceilometer.openstack.common.rpc.amqp result = getattr(proxyobj, method)(ctxt, **kwargs)
2014-08-21 08:18:31.283 8309 TRACE ceilometer.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/ceilometer/collector.py", line 106, in record_metering_data
2014-08-21 08:18:31.283 8309 TRACE ceilometer.openstack.common.rpc.amqp self.dispatcher_manager.map_method('record_metering_data',
2014-08-21 08:18:31.283 8309 TRACE ceilometer.openstack.common.rpc.amqp AttributeError: 'NamedExtensionManager' object has no attribute 'map_method'

Thank you very much,

Dave


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [Ceilometer/Heat] correct format of webhook for alarm-action

Hi Folks,

For the "ceilometer alarm-threshold-create" cmd, what is the correct format of the webhook URL for alarm action, if I want to trigger a template in Heat to be executed once this alarm is notified? Are there any example of invoking heat template?


--alarm-action
URL to invoke when state transitions to alarm. May be used multiple times. Defaults to None.

I "google"ed for this, but most are related to Heat autoscaling and the alarm is defined in Heat template directly.

Thanks in advance,
Gary


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On Thu, Aug 21, 2014 at 09:37:57AM +0000, Duan, Li-Gong (Gary@HPServers-Core-OE-PSC) wrote:
Hi Folks,

For the "ceilometer alarm-threshold-create" cmd, what is the correct
format of the webhook URL for alarm action, if I want to trigger a
template in Heat to be executed once this alarm is notified? Are there any
example of invoking heat template?

Can you explain what you mean by "invoking heat template" in more detail
please?

The URL is expected to be the pre-signed signal URL provided by some heat
resources, e.g the ScalingPolicy resources for AutoScaling.


--alarm-action

       URL to invoke when state transitions to alarm. May be used

multiple times. Defaults to None.


I "google"ed for this, but most are related to Heat autoscaling and the
alarm is defined in Heat template directly.

That is the primary use-case, if you want heat to spin up a new stack in
response to an alarm, the best way to do that is probably via
OS::Heat::AutoScalingGroup, which can scale out heat stacks, here's an
example:

https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml#L65

This will scale out the "lb_server.yaml" template each time ceilometer
posts to the pre-signed URL provided by the OS::Heat::ScalingPolicy
resource:

https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml#L118

However if what you actually want is for ceilometer to spin up a stack by
interacting with the heat API directly triggering the stack create, then I
think currently that is not possible, even if you were to create a
pre-signed stack create call for the heat-api-cfn.

This is because ceilometer inserts a pre-determined request body, which
will only work with the handle_signal methods of the resources which expect
to handle alarm signals from ceilometer:

https://github.com/openstack/heat/blob/master/heat/engine/resources/autoscaling.py#L1028

Steve
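
For completeness, a hedged example of wiring such a pre-signed URL into an alarm with the CLI of this era (the signal URL stands in for the alarm_url attribute of an OS::Heat::ScalingPolicy):

ceilometer alarm-threshold-create --name cpu-high \
  --meter-name cpu_util --statistic avg --period 600 --evaluation-periods 1 \
  --comparison-operator gt --threshold 70 \
  --query metadata.user_metadata.stack=<stack_id> \
  --alarm-action '<pre-signed ScalingPolicy signal URL>'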

[Openstack] 'allow_same_net_traffic=True' seems to have no effect

Greetings,

brief

two instances X and Y are members of security group A. Despite the
following explicit setting in nova.conf:

allow_same_net_traffic=True

...the instances are only allowed to communicate according to the rules
defined in security group A.

detail

I first noticed this while attempting to run iperf between two instances in the
same security group; they were unable to connect via the default TCP port
5001.

They were able to ping... Looking at the rules for the security group they
are associated with, ping was allowed, so I then suspected the security
group rules were being applied to all communication, despite the instances
being in the same security group.

To test, I added rules to group A that allowed all communication, with the
source group set to itself (i.e. security group A), and voila, they could
talk!

I then thought I had remembered incorrectly that by default all traffic is
allowed between instances on the same security group, so I double-checked
the documentation, but according to the documentation I had remembered
correctly:

allow_same_net_traffic = True (BoolOpt) Whether to allow network traffic
from same network

...I searched through my nova.conf files, but there was no
'allow_same_net_traffic' entry, so the default ought to be True, right?
Just to be sure, I explicitly added:

allow_same_net_traffic = True

to nova.conf and restarted nova services, but the security group rules are
still being applied to communication between instances that are associated
with the same security group.

I thought the 'default' security group might be a special case, so I tested
on another security group, but still get the same behaviour.

Is this a bug, or have I missed something here?

//Daniel
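
In case this turns out to be expected behaviour rather than a bug, the workaround described above can be expressed as source-group rules, i.e. rules whose source is the group itself. A hedged nova CLI sketch (protocols and port ranges are assumptions):

nova secgroup-add-group-rule A A tcp 1 65535    # members of A may reach members of A on any TCP port
nova secgroup-add-group-rule A A udp 1 65535
nova secgroup-add-group-rule A A icmp -1 -1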


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [Murano] Screencasts series: Composing Murano Application Package

Hi all!

I'd like to inform you that a new Murano screencast
http://youtu.be/_zGo-MtGu78 [1] has been released!
It's about the application package structure and will show you how to
compose an application package.

Try creating your own application and deploying it in OpenStack with one
click!

Waiting for your questions at #murano channel.

[1] - http://youtu.be/_zGo-MtGu78


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [OSSA 2014-028] Glance store DoS through disk space exhaustion (CVE-2014-5356)

OpenStack Security Advisory: 2014-028
CVE: CVE-2014-5356
Date: August 21, 2014
Title: Glance store DoS through disk space exhaustion
Reporter: Thomas Leaman (HP), Stuart McLaren (HP)
Products: Glance
Versions: up to 2013.2.3 and 2014.1 versions up to 2014.1.2

Description:
Thomas Leaman and Stuart McLaren from Hewlett Packard reported a
vulnerability in Glance. By uploading a large enough image to a Glance
store, an authenticated user may fill the store space because the
image_size_cap configuration option is not honored. This may prevent
further image upload and/or cause service disruption. Note that the
import method is not affected. All Glance setups using API v2 are
affected (unless you use a policy to restrict/disable image upload).

Juno (development branch) fix:
https://review.openstack.org/91764

Icehouse fix:
https://review.openstack.org/115280

Havana fix:
https://review.openstack.org/115289

Notes:
This fix will be included in the Juno-3 development milestone and in
future 2013.2.4 and 2014.1.3 releases.

References:
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-5356
https://launchpad.net/bugs/1315321

--
Tristan Cacqueray
OpenStack Vulnerability Management Team


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [Trove] trove list results in ERROR: Unauthorized (HTTP 401)

Dear all,

I run the stable branch of OpenStack Icehouse on Scientific Linux 6,
installed with Packstack, with Trove release 2014.2.b2 and
python-troveclient 1.0.5 (newest release/tag for both of them).

My problem: if as OpenStack user admin, that means, with sourced
keystonerc_admin, I run trove list or trove datastore-list I get

ERROR: Unauthorized (HTTP 401).

The same happens with user trove (see below).

The log files trove.log and keystone.log show (full excerpts below):

trove.log:

  • Unexpected response from keystone service: {u'error': {u'message': u"object of type 'NoneType' has no len()", u'code': 400, u'title': u'Bad Request'}}
  • ServiceError: invalid json response
  • Authorization failed for token

keystone.log:

  • TypeError: object of type 'NoneType' has no len()

I think it is a problem with the users and tenants I configured for
Trove. Someone having the same problem with Glance could fix it by
putting the right login information into glance.conf, see
https://answers.launchpad.net/glance/+question/229769 .

For configuration of Trove I followed the OpenStack documentation [1],
Troves documentation for manual install [2] and the DevStack code [3].

[1]
http://docs.openstack.org/icehouse/install-guide/install/yum/content/trove-install.html
[2] http://docs.openstack.org/developer/trove/dev/manual_install.html
[3] https://github.com/openstack-dev/devstack/blob/stable/icehouse/lib/trove

The three sources use different combinations of users and tenants. Does
anyone of you know which users and tenants have to be used? At the
moment, I have the following configuration:

Users and tenants:
* tenant trove
* user trove is member and admin in tenant trove and services,
which is the service tenant in my installation of OpenStack
* user admin is member and admin in tenant trove, and for testing
even member and admin in services, but this didn't help

Trove's api-paste.ini:

[filter:authtoken]
admin_user=trove
admin_password=***
admin_tenant_name=services

trove-taskmanager.conf, trove-conductor.conf and trove-guestagent.conf:

[DEFAULT]
nova_proxy_admin_user=admin
nova_proxy_admin_pass=***
nova_proxy_admin_tenant_name=trove

[1] uses tenant services here.

Any help and guidelines are appreciated.

Best regards,
Benjamin

trove.log

2014-08-21 14:55:27.122 6221 INFO eventlet.wsgi [-] (6221) accepted ('...', 45415)
2014-08-21 14:55:27.123 6221 DEBUG keystoneclient.middleware.auth_token [-] Authenticating user token __call__ /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:666
2014-08-21 14:55:27.124 6221 DEBUG keystoneclient.middleware.auth_token [-] Removing headers from request environment: X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role _remove_auth_headers /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:725
2014-08-21 14:55:27.132 6221 WARNING keystoneclient.middleware.auth_token [-] Unexpected response from keystone service: {u'error': {u'message': u"object of type 'NoneType' has no len()", u'code': 400, u'title': u'Bad Request'}}
2014-08-21 14:55:27.132 6221 DEBUG keystoneclient.middleware.auth_token [-] Token validation failure. _validate_user_token /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:943
2014-08-21 14:55:27.132 6221 TRACE keystoneclient.middleware.auth_token Traceback (most recent call last):
2014-08-21 14:55:27.132 6221 TRACE keystoneclient.middleware.auth_token File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", line 930, in _validate_user_token
2014-08-21 14:55:27.132 6221 TRACE keystoneclient.middleware.auth_token verified = self.verify_signed_token(user_token, token_ids)
2014-08-21 14:55:27.132 6221 TRACE keystoneclient.middleware.auth_token File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", line 1347, in verify_signed_token
2014-08-21 14:55:27.132 6221 TRACE keystoneclient.middleware.auth_token if self.is_signed_token_revoked(token_ids):
2014-08-21 14:55:27.132 6221 TRACE keystoneclient.middleware.auth_token File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", line 1299, in is_signed_token_revoked
2014-08-21 14:55:27.132 6221 TRACE keystoneclient.middleware.auth_token if self._is_token_id_in_revoked_list(token_id):
2014-08-21 14:55:27.132 6221 TRACE keystoneclient.middleware.auth_token File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", line 1306, in _is_token_id_in_revoked_list
2014-08-21 14:55:27.132 6221 TRACE keystoneclient.middleware.auth_token revocation_list = self.token_revocation_list
2014-08-21 14:55:27.132 6221 TRACE keystoneclient.middleware.auth_token File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", line 1413, in token_revocation_list
2014-08-21 14:55:27.132 6221 TRACE keystoneclient.middleware.auth_token self.token_revocation_list = self.fetch_revocation_list()
2014-08-21 14:55:27.132 6221 TRACE keystoneclient.middleware.auth_token File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", line 1446, in fetch_revocation_list
2014-08-21 14:55:27.132 6221 TRACE keystoneclient.middleware.auth_token headers = {'X-Auth-Token': self.get_admin_token()}
2014-08-21 14:55:27.132 6221 TRACE keystoneclient.middleware.auth_token File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", line 777, in get_admin_token
2014-08-21 14:55:27.132 6221 TRACE keystoneclient.middleware.auth_token self.admin_token_expiry) = self._request_admin_token()
2014-08-21 14:55:27.132 6221 TRACE keystoneclient.middleware.auth_token File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", line 890, in _request_admin_token
2014-08-21 14:55:27.132 6221 TRACE keystoneclient.middleware.auth_token raise ServiceError('invalid json response')
2014-08-21 14:55:27.132 6221 TRACE keystoneclient.middleware.auth_token ServiceError: invalid json response
2014-08-21 14:55:27.132 6221 TRACE keystoneclient.middleware.auth_token
2014-08-21 14:55:27.134 6221 DEBUG keystoneclient.middleware.auth_token [-] Marking token as unauthorized in cache _cache_store_invalid /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:1239
2014-08-21 14:55:27.134 6221 WARNING keystoneclient.middleware.auth_token [-] Authorization failed for token
2014-08-21 14:55:27.135 6221 INFO keystoneclient.middleware.auth_token [-] Invalid user token - rejecting request
2014-08-21 14:55:27.135 6221 INFO eventlet.wsgi [-] ... - - [21/Aug/2014 14:55:27] "GET /v1.0/b426bf07c5cd48d1b2525699bca29cdb/instances HTTP/1.1" 401 199 0.011922

keystone.log

2014-08-21 14:55:27.100 3165 INFO eventlet.wsgi.server [-] ... - - [21/Aug/2014 14:55:27] "POST /v2.0/tokens HTTP/1.1" 200 10323 0.213025
2014-08-21 14:55:27.129 3165 ERROR keystone.common.wsgi [-] object of type 'NoneType' has no len()
2014-08-21 14:55:27.129 3165 TRACE keystone.common.wsgi Traceback (most recent call last):
2014-08-21 14:55:27.129 3165 TRACE keystone.common.wsgi File "/usr/lib/python2.6/site-packages/keystone/common/wsgi.py", line 207, in __call__
2014-08-21 14:55:27.129 3165 TRACE keystone.common.wsgi result = method(context, **params)
2014-08-21 14:55:27.129 3165 TRACE keystone.common.wsgi File "/usr/lib/python2.6/site-packages/keystone/token/controllers.py", line 98, in authenticate
2014-08-21 14:55:27.129 3165 TRACE keystone.common.wsgi context, auth)
2014-08-21 14:55:27.129 3165 TRACE keystone.common.wsgi File "/usr/lib/python2.6/site-packages/keystone/token/controllers.py", line 256, in _authenticate_local
2014-08-21 14:55:27.129 3165 TRACE keystone.common.wsgi if len(username) > CONF.max_param_size:
2014-08-21 14:55:27.129 3165 TRACE keystone.common.wsgi TypeError: object of type 'NoneType' has no len()
2014-08-21 14:55:27.129 3165 TRACE keystone.common.wsgi
2014-08-21 14:55:27.131 3165 INFO eventlet.wsgi.server [-] ***.***.***.*** - - [21/Aug/2014 14:55:27] "POST /v2.0/tokens HTTP/1.1" 400 239 0.004268


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [Zuul] Understanding dequeue-on-new-patchset

Hi-

I configured Zuul with the parameter 'dequeue-on-new-patchset: true'.

With this, for a change, if multiple patchsets are in the queue, only the latest should be taken and the rest ignored.

But I noticed that every patchset is going through zuul-merger to Gearman and finally to Jenkins.

Is there any configuration in Zuul I need to take care of?

Kindly help me in this regard.

BR
--
Trinath Somanchi - B39208
trinath.somanchi@freescale.com | extn: 4048


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

"trinath.somanchi@freescale.com" trinath.somanchi@freescale.com writes:

Hi-

I configured Zuul with the parameter 'dequeue-on-new-patchset: true'.

With this, for a change, if multiple patchsets are in the queue, only the latest should be taken and the rest ignored.

But I noticed that every patchset is going through zuul-merger to Gearman and finally to Jenkins.

Is there any configuration in Zuul I need to take care of?

Kindly help me in this regard.

If a change is in a pipeline with, say, patchset 1, and then someone
uploads patchset 2, the first should be removed from the pipeline and
then patchset 2 enqueued.

Strictly speaking, that will mean that every patchset will go through
the merger and Jenkins. But if testing for a patchset is in progress
when a new patchset is uploaded, the tests will abort.

If that's not what's happening, can you please provide your layout file
along with debug level log messages from startup as well as a time when
a change was erroneously not dequeued?

Thanks,

Jim
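
For reference, dequeue-on-new-patchset is a per-pipeline option in Zuul's layout.yaml and defaults to true. A hedged minimal pipeline sketch:

pipelines:
  - name: check
    manager: IndependentPipelineManager
    dequeue-on-new-patchset: true
    trigger:
      gerrit:
        - event: patchset-created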

[Openstack] Backup of cloud applications in OpenStack

Was asked at work a few months back to look at backup for OpenStack.

To my mind, the end result needed is clear (had some time to think on the
subject), and what I see in OpenStack at present - or what is proposed - is
not what we need.

Wrote on the subject:
http://bannister.us/weblog/2014/08/21/cloud-application-backup-and-openstack/

Currently hacking at OpenStack to get the needed behavior. :)


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] legacy nova-network and auto scaling

Hi,
I have a working Icehouse release with "nova-network" as the networking
service.
My question is: is there any dependency between autoscaling and Neutron, i.e. do I
need Neutron for autoscaling?

Thanks


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On Fri, Aug 22, 2014 at 01:50:52PM +0530, mad Engineer wrote:
Hi,
I have a working Icehouse release with "nova-network" as the
networking service.
My question is: is there any dependency between autoscaling and Neutron, i.e. do I
need Neutron for autoscaling?

I'm going to assume you're talking about Heat autoscaling here, in which
case the answer depends on which LoadBalancer resource you want to use.

The cloudformation-compatible LoadBalancer resource spins up a VM running
haproxy, with no dependency on neutron, but if you want to use the Neutron
LBaaS interfaces, and the OS::Neutron::LoadBalancer resource, you will need
Neutron ;)

Examples:
https://github.com/openstack/heat-templates/blob/master/cfn/F17/AutoScalingMultiAZSample.yaml

https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml#L151

In summary, if you want to make use of the native as opposed to
cfn-compatible autoscaling functionality, it's probably best to use
Neutron.

Steve

[Openstack] can not receive from metadata.

Hi All,

I have 4 nodes openstack platform.
(controller, network, compute1, compute2)

When I start an instance with cirros on compute2 (nova), I see the failure logs below:

cirros-ds 'net' up at 1.12
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 1.12. request failed
failed 2/20: up 3.31. request failed
failed 3/20: up 5.49. request failed
failed 4/20: up 7.68. request failed
failed 5/20: up 9.88. request failed
failed 6/20: up 12.07. request failed
failed 7/20: up 14.26. request failed
failed 8/20: up 16.46. request failed
failed 9/20: up 18.64. request failed
failed 10/20: up 20.83. request failed
failed 11/20: up 23.01. request failed
failed 12/20: up 25.20. request failed
failed 13/20: up 27.39. request failed
failed 14/20: up 29.58. request failed
failed 15/20: up 31.77. request failed
failed 16/20: up 33.95. request failed
failed 17/20: up 36.14. request failed
failed 18/20: up 38.33. request failed
failed 19/20: up 40.52. request failed
failed 20/20: up 42.71. request failed
failed to read iid from metadata. tried 20
no results found for mode=net. up 44.90. searched: nocloud configdrive ec2
failed to get instance-id of datasource

Starting dropbear sshd: generating rsa key... generating dsa key... OK
=== system information ===
Platform: OpenStack Foundation OpenStack Nova
Container: none
Arch: x86_64
CPU(s): 1 @ 2260.984 MHz
Cores/Sockets/Threads: 1/1/1
Virt-type: VT-x
RAM Size: 491MB
Disks:
NAME MAJ:MIN SIZE LABEL MOUNTPOINT
vda 253:0 1073741824
vda1 253:1 1061061120 cirros-rootfs /
=== sshd host keys ===
-----BEGIN SSH HOST KEY KEYS-----
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgwCg0x9q84oqW1Sjofs676KZCkmSQO3jCoql6znRH0uBevMAu1LOyrHEwllO5aMX0r/VEd4ZTPfb/38/b12HhYiwNwVFiwvuyQkYXR+09eY81RncZHKaq+V8Fbtowa9rp8jypTrj8MrEseZ+BlxpHNrra9gCzHkR2rj76PiwCS8l5YSr
root@cirros
ssh-dss AAAAB3NzaC1kc3MAAACBAOMHrDHv8BuhkLpV4Y668PAyKuNjRqEIunQcXGtQzUefvwQOc8BqN6i3UAc2xfdBviuU9RHg6pAfNX305FWxFhIawB/YSditKpq/J+AyPOdz5TxqtiUrbLqyk0Hf6jWpVp6sOgQqlgYDxaNpH7pfUKnq0I1JIQA2gYXokbFZMr1DAAAAFQC8g7KeSNaW8vNjfl3pK1I17MKN7QAAAIAB2ZCrsZyBFk2Jxs5v7pP45RiZqeuEWdCR8oGYtAT72wvDQ4AK600NRuKq+ZK8tnVQFYpsTMZyShCurh5tF1tQYlNIqK73FU67Tdb6Nu5ru1GN5DebQe0cxxMtCqqbZMoUsqWhncGO32JuS88PuTO4tiXriWUXY4NBwr7ImfEapAAAAIBHRcxhVB5QyUtdTlUlMqrNJbYbqF0I2SRoCR9dnyL7lqJrW5JF4Z+hP2c8kZnT7fNbQ0GNe//1j9Yvrw8UMqXjS/RU7Q1818e3ZikCotNdxkP5eVqflsXJ/+5FraG2Dnov832xV19E7zmtxFj5IviGi2FnepID8TWQMAvAu3I7IA==
root@cirros
-----END SSH HOST KEY KEYS-----
=== network info ===
if-info: lo,up,127.0.0.1,8,::1
if-info: eth0,up,192.168.2.84,24,fe80::f816:3eff:fee9:a249
ip-route:default via 192.168.2.254 dev eth0
ip-route:192.168.2.0/24 dev eth0 src 192.168.2.84
=== datasource: None None ===
=== cirros: current=0.3.2 uptime=45.19 ===
[garbled cirros ASCII-art banner]
http://cirros-cloud.net

login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
cirros login:

I was able to access cirros by ssh, but the instance did not have
the public ssh key installed.
I guess the instance cannot access the metadata service, but I have not
found a solution for this failure.
What should I check? I would appreciate any advice.

My environment;
All systems are Ubuntu 14.04 LTS,
OpenStack: 1:2014.1.1-0ubuntu1

Regards,
TK
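
A few hedged checks that usually narrow this down (service and namespace names assume the Icehouse-era Ubuntu Neutron agents):

# on the network node: is the metadata agent up, and does the router namespace exist?
service neutron-metadata-agent status
ip netns | grep qrouter
# inside the router namespace, is the metadata proxy listening?
sudo ip netns exec qrouter-<router-id> netstat -lnpt | grep 9697
# from inside the VM:
curl http://169.254.169.254/2009-04-04/instance-id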


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Floating IP issues with multiple physical NICs, subnets

All,

We're trying to configure the following scenario - Compute nodes with
multiple physical NICs, each dedicated to a specific function/subnet:

Management/Private: 10.96.32.0/24
Storage: 10.96.48.0/24
External/Floating/DMZ: 10.96.16.0/24

We currently have two Nova Flat DHCP Networks (not using Neutron due to
lack of multi-host support) configured for both Storage and Management, and
are able to get appropriate connectivity in our VMs on each of these
subnets.

However, when we try to assign a floating IP to a VM in the External
subnet, we see problematic routing of packets. Packets reach the VM, the VM
responds, and then the response packets are often routed back out the
Management subnet. The behavior is inconsistent: some VMs can reliably
route packets back out the External NIC/subnet, and everything works; other
VMs consistently respond via the Management subnet; still others seem to
flip-flop between responding over the External and Management subnets.

When packets are sent over the incorrect NIC, our switches drop them, as we
do not allow routing between subnets.

How do we ensure that outbound/response packets from a VM are routed over the
NIC that originally received the request packets in the first place?
Connection tracking is specified in our iptables rules on the Compute
nodes, as automatically configured by Nova Network.

Any thoughts? Are we trying to configure a scenario not supported by
OpenStack?

Thanks,
--Scott
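
A general technique for this symptom, independent of anything Nova sets up, is source-based policy routing, so that replies leave via the interface that owns their source address. A minimal sketch, assuming eth2 faces the External subnet with gateway 10.96.16.1:

# replies sourced from the External subnet consult a dedicated routing table
ip rule add from 10.96.16.0/24 table 116
ip route add default via 10.96.16.1 dev eth2 table 116
ip route flush cache

Whether this belongs in the guest or on the compute host depends on where the floating IP is actually terminated in this Nova Flat DHCP setup.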


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Anybody?
Thanks,
--Scott



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Swift Ring Maintenance

All,

I want to reconfigure a number of disks in my Swift storage cluster to reside in different zones, and I’m unsure of the best way to accomplish this.

One way would be to set the drive weights to 0 and wait for data to migrate off the drives, then remove the drive from their current zone and re-add the drive to the new zone, rebalance and push the new ring files out to the cluster.

Or I could simply remove the drives, re-add the drives to their new zones, rebalance and push out the updated ring files.

Is one approach better than the other, or is there a better way than I’ve outlined above? Since any approach would be performed over a weekend, I’m not concerned about the effects of cluster performance as partitions are shuffled around.

Thoughts and inputs are welcome.

Thanks,
Ross


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

You've actually identified the issues involved. Here's a writeup on how you can do it, and the general best-practice for capacity management in Swift:

https://swiftstack.com/blog/2012/04/09/swift-capacity-management/

--John
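
For reference, the gradual drain described above maps onto swift-ring-builder roughly like this (a sketch; the builder file, device id d123, and the z3 target are assumptions):

# step the weight down and rebalance; repeat until the device is drained
swift-ring-builder object.builder set_weight d123 0
swift-ring-builder object.builder rebalance

# then remove the device and re-add it in its new zone
swift-ring-builder object.builder remove d123
swift-ring-builder object.builder add z3-10.0.0.5:6000/sdb1 100
swift-ring-builder object.builder rebalance

# finally push object.ring.gz out to every node, as before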



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] How to config the Internet Accessing?

Hi,

I'm new to Openstack. Can anyone show me the simple steps to configure
Internet access?

Currently, I want the instance I created to be accessible (SSH) from an
external network.

Probably, from my understanding, I need to give my instance a floating IP
which is accessible from outside.

But I have no idea for the detail configuration steps.

I would appreciate anyone's help.

Thanks!


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi Xianyi,

For accessing a VM from an external network you need to configure a virtual
router first, and then attach the virtual router to your private network and
your external network. Once that is done, you need to allocate a floating IP
to the VM.

You also need to create the OVS bridge br-ex, add a port from the
EXTERNAL_INTERFACE to br-ex, and configure an IP on br-ex. Configure the
EXTERNAL_INTERFACE without an IP address and in promiscuous mode.

Hope this helps.
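
For reference, those steps look roughly like this with the Icehouse-era CLIs (a sketch; the network, router, and instance names and the floating IP are assumptions):

# router plumbing: gateway on the external net, interface on the private subnet
neutron router-create demo-router
neutron router-gateway-set demo-router ext-net
neutron router-interface-add demo-router private-subnet

# allocate a floating IP and attach it to the instance
neutron floatingip-create ext-net
nova floating-ip-associate my-instance 203.0.113.10

# open SSH and ping in the instance's security group
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0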



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Associate network to one port

Hello,

I'm using the openstack Havana release with neutron. Is it possible somehow
to associate more than one IP with one port in neutron? For example, if I
have a /24 subnet, is it possible to associate a /27 or /28 from that
subnet with one port?

--
Thanks in advance,
Sławek Kapłoński
slawek@kaplonski.pl

--
Klucz GPG można pobrać ze strony:
http://kaplonski.pl/files/slawek_kaplonski.pub.key
--
My public GPG key can be downloaded from:
http://kaplonski.pl/files/slawek_kaplonski.pub.key
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

You can use the allowed-address-pairs extension to assign more than one IP
address to a port:

http://docs.openstack.org/api/openstack-network/2.0/content/allowed_address_pair_ext.html
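
A sketch of what that looks like with the neutron CLI (the port id and the /27 prefix are placeholders):

# permit an extra prefix on an existing port
neutron port-update <port-id> --allowed-address-pairs type=dict list=true ip_address=192.0.2.32/27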



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

--
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333

[Openstack] Fwd: nova instance

hello,
I'm new to openstack so I need some help.
Whenever I try to launch an instance it gives me the error:
"Error: Failed to launch instance "nova_testing": Please try again later
[Error: No valid host was found.]"

I'm even unable to find the log files.

Can anybody help me on this?

thanks in advance


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi,

You can set debug mode to "True" in nova.conf and check the log messages in
compute.log; you will find some hints to continue the investigation.
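
A minimal sketch of that change (standard package installs log under /var/log/nova):

# /etc/nova/nova.conf
[DEFAULT]
debug = True
verbose = True

Then restart the nova services and grep the scheduler and compute logs, e.g. grep 'No valid host' /var/log/nova/nova-scheduler.log.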


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Keystone and Horizon questions

Hello,
I had some Keystone and Horizon questions; would appreciate it if
anyone can help :-)

a. Does Keystone run as a service scaled across multiple nodes? Since
it would need access to shared data, I don't understand how it scales.

b. If it does not run on multiple nodes as a service, how can it
handle a large number of concurrent requests?

c. Is it expected to be a service that is exposed to the clients on a
public cloud? Does Rackspace today do it or any other public cloud?

Some horizon questions also

a. Does Horizon allow customization for specific services? Is there
any way to add value on top of it for my own Nova or Swift or Cinder
view, can I do that or is it whatever is done by the community the
only thing available?

b. It seems like Horizon is basic right now, do vendors provide their
own GUI for customers?

c. Is there a hardware monitoring also as part of Horizon? For
servers? For storage arrays? Is there a standardized agent or
something that runs on hardware?

MW


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Excerpts from Marcus White's message of 2014-08-23 09:55:24 -0700:

Hello,
I had some Keystone and Horizon questions; would appreciate it if
anyone can help :-)

a. Does Keystone run as a service scaled across multiple nodes? Since
it would need access to shared data, I don't understand how it scales.

It stores data in various backends. The default is SQL, and many use
LDAP to link with existing ID stores.

It runs as a REST API in front of those data backends, and scales out
quite nicely like any other web application.
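
As an illustration of that scale-out (purely a sketch; the addresses are made up), a common pattern is a load balancer in front of several identical keystone-api processes that all share one database:

# haproxy.cfg fragment
listen keystone-public
    bind 192.0.2.10:5000
    balance roundrobin
    server keystone1 192.0.2.11:5000 check
    server keystone2 192.0.2.12:5000 check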

b. If it does not run on multiple nodes as a service, how can it
handle a large number of concurrent requests?

c. Is it expected to be a service that is exposed to the clients on a
public cloud? Does Rackspace today do it or any other public cloud?

Yes it is expected to be exposed. Users authenticate to keystone and are
given a token which can be used to grant access to the other OpenStack
services.

Some horizon questions also

a. Does Horizon allow customization for specific services? Is there
any way to add value on top of it for my own Nova or Swift or Cinder
view, can I do that or is it whatever is done by the community the
only thing available?

It's open source. Go wild. ;) I believe there are hooks for
skins/themes, and one can add plugins for other functionality.

b. It seems like Horizon is basic right now, do vendors provide their
own GUI for customers?

See horizon.hpcloud.net for a public cloud provider (disclosure: my
employer) using Horizon.

c. Is there a hardware monitoring also as part of Horizon? For
servers? For storage arrays? Is there a standardized agent or
something that runs on hardware?

Horizon isn't really about "servers" and "arrays". It is meant as the
interface for the users of the cloud, not the operators.

[Openstack] Keystone with ad

Quick note to check if anyone has any tips for icehouse for keystone to use ad on the backend.

Thanks

Sent from iPhone ()


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Remo,

A few interesting articles:

http://behindtheracks.com/2013/08/openstack-active-directory-integration/

http://behindtheracks.com/openstack-active-directory-integration-my-icehouse-ldap-objects/
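
The short version of those articles is pointing keystone's identity backend at AD in keystone.conf; a sketch with made-up hostnames and DNs:

[identity]
driver = keystone.identity.backends.ldap.Identity

[ldap]
url = ldap://ad.example.com
user = CN=keystone,OU=Service Accounts,DC=example,DC=com
password = secret
suffix = DC=example,DC=com
user_tree_dn = OU=Users,DC=example,DC=com
user_objectclass = person
user_name_attribute = sAMAccountName
user_enabled_attribute = userAccountControl
user_enabled_mask = 2
user_enabled_default = 512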

~~shane

Sr. Principal Infrastructure Architect
Symantec Cloud Platform Engineering



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] should instances be able to ping each other through a router?

I have the below heat template instantiated.

Each instance (h1/h2/h3) can ping out to the world just fine.
In addition, h1 can ping h2 & h3, and they can ping it, i.e. everyone can
ping everyone on its own subnet.
But h2 and h3 cannot ping each other (this is a routing function
rather than local net).

I am using vxlan with neutron, ovs ml2 on icehouse ubuntu 14.04.

I have port_security disabled (and iptables -L shows this to be true
in the router namespace).

What is happening is the ping hits the router port and stops,
e.g. 172.16.1.X sends ICMP to 172.16.2.1, and it's never seen again.

Should I be expecting this to work? It seems that this should not be
an SNAT issue; it's all inside my private networking space.

From the host, if I 'ip netns exec qrouter-<...>' I can ping each
interface inside each vm, so I know the host can reach them.

So, uh, suggestions on how to debug this? My 'trusty' image below is
ubuntu 14.04, but it also happens w/ cirros fwiw.

----------------------------
heat_template_version: 2013-05-23

description: >

resources:
  key:
    type: OS::Nova::KeyPair
    properties:
      name: { str_replace: { params: { $stack_name: { get_param: 'OS::stack_name' } }, template: '$stack_name-key' } }
      save_private_key: True

  rtr:
    type: OS::Neutron::Router
    properties:
      admin_state_up: True
      name: { str_replace: { params: { $stack_name: { get_param: 'OS::stack_name' } }, template: '$stack_name-rtr' } }
      external_gateway_info:
        network: "ext-net"

  ctrl_net:
    type: OS::Neutron::Net
    properties:
      name: { str_replace: { params: { $stack_name: { get_param: 'OS::stack_name' } }, template: '$stack_name-data-ctrl-net' } }

  ctrl_subnet:
    type: OS::Neutron::Subnet
    properties:
      name: { str_replace: { params: { $stack_name: { get_param: 'OS::stack_name' } }, template: '$stack_name-data-ctrl-subnet' } }
      enable_dhcp: True
      network_id: { get_resource: ctrl_net }
      cidr: 172.16.1/24
      allocation_pools:
        - start: 172.16.1.10
          end: 172.16.1.254

  router_i0:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: { get_resource: rtr }
      subnet_id: { get_resource: ctrl_subnet }

  router_i1:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: { get_resource: rtr }
      subnet_id: { get_resource: data_int_subnet }

  int_net:
    type: OS::Neutron::Net
    properties:
      name: { str_replace: { params: { $stack_name: { get_param: 'OS::stack_name' } }, template: '$stack_name-int-net' } }

  data_int_subnet:
    type: OS::Neutron::Subnet
    properties:
      name: { str_replace: { params: { $stack_name: { get_param: 'OS::stack_name' } }, template: '$stack_name-data-int-subnet' } }
      enable_dhcp: True
      network_id: { get_resource: int_net }
      cidr: 172.16.2/24
      allocation_pools:
        - start: 172.16.2.10
          end: 172.16.2.254

  h1:
    type: OS::Nova::Server
    properties:
      name: { str_replace: { params: { $stack_name: { get_param: 'OS::stack_name' } }, template: '$stack_name-h1' } }
      key_name: { get_resource: key }
      image: "trusty"
      flavor: "m1.tiny"
      config_drive: "true"
      networks:
        - network: { get_resource: ctrl_net }
        - network: { get_resource: int_net }
      user_data_format: RAW
      user_data: |
        #!/bin/bash
        ifup eth1
        dhclient eth1

  h2:
    type: OS::Nova::Server
    properties:
      name: { str_replace: { params: { $stack_name: { get_param: 'OS::stack_name' } }, template: '$stack_name-h2' } }
      key_name: { get_resource: key }
      image: "trusty"
      flavor: "m1.tiny"
      config_drive: "true"
      networks:
        - network: { get_resource: ctrl_net }

  h3:
    type: OS::Nova::Server
    properties:
      name: { str_replace: { params: { $stack_name: { get_param: 'OS::stack_name' } }, template: '$stack_name-h3' } }
      key_name: { get_resource: key }
      image: "trusty"
      flavor: "m1.tiny"
      config_drive: "true"
      networks:
        - network: { get_resource: int_net }

outputs:
  key:
    description: The private key to login to these images with
      (try heat output-show key | sed -e 's?"??g' -e 's?\\n?\n?g' > ~/.ssh/rsa)
    value: { get_attr: [ key, private_key ] }

----------------------------


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Sorry to follow up my own question, but I find that the ICMP echo gets
to the host on which the destination instance is running, and, of the
following interfaces, all but the 'tap*' interface see it.

qbr23bbb27b-2f Link encap:Ethernet HWaddr 26:cb:b8:4c:12:1c
qvb23bbb27b-2f Link encap:Ethernet HWaddr 26:cb:b8:4c:12:1c
qvo23bbb27b-2f Link encap:Ethernet HWaddr 12:e1:8a:e6:22:69
tap23bbb27b-2f Link encap:Ethernet HWaddr fe:16:3e:5a:39:d9

So I guess I need to understand why the q? -> tap path drops my ICMP echo.
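
One way to watch exactly where it dies (interface names taken from the list above):

# on the compute node hosting the destination instance
sudo tcpdump -ni qvb23bbb27b-2f icmp    # veth into the qbr bridge
sudo tcpdump -ni tap23bbb27b-2f icmp    # tap into the VM itself
# if the bridge's iptables rules are eating it, the per-port chains show hits
sudo iptables -L neutron-openvswi-sg-chain -v -n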



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Re: [Openstack] [openstack-dev] [openstack][DOUBT]Please clarify

Please don't take conversations off-list, as others may benefit from your
questions and I don't have all the answers. Adding the openstack list, as
this is not an openstack-dev discussion about the future of OpenStack. More
below.

On Fri, Aug 22, 2014 at 4:31 AM, Sharath V vsharathis@gmail.com wrote:

Hi Anne,

Thanks a lot for your help.

I have some small doubts, mentioned below.

  1. Why do we need to separate the controller, compute, and network nodes?

For scalability, performance, availability and probably other reasons
related to keeping the cloud running day after day. The diagrams in the
install guide are examples to get you started. The first is using
nova-network the second offers an example with neutron.
http://docs.openstack.org/icehouse/install-guide/install/apt/content/ch_overview.html#example-architecture-with-neutron-networking

2) As per the above diagram, why do we need a network interface for each
controller? May I know what it is used for?

I believe it's to connect the external network to the internal network
(between VMs). Also you may want to provide separate endpoints for admin
API actions hence the management interface. These descriptions are for when
you consider operating a cloud for many users. For a proof of concept you
don't have to worry about a management network interface, for instance.

3) When I was reading the document about the controller node, I found the
statement below:
"Optionally, the controller node also runs portions of Block Storage,
Object Storage, Database Service, Orchestration, and Telemetry. These
components provide additional features for your environment."

May I know what is meant by "portion" services? Then where do the complete
services run?

Portions means "parts" or "pieces" here. For reliability, security,
availability and other reasons you run some daemons and services on
different nodes.

You would consider your users needs and determine which services they want.
You can find more examples of what architectures and designs are for
different clouds in the Architecture Design Guide. Here are prescriptive
examples:
http://docs.openstack.org/arch-design/content/prescriptive-example-online-classifieds.html

http://docs.openstack.org/arch-design/content/prescriptive-example-compute-focus.html

http://docs.openstack.org/arch-design/content/prescriptive-example-storage-focus.html

http://docs.openstack.org/arch-design/content/prescriptive-example-large-scale-web-app.html

http://docs.openstack.org/arch-design/content/prescriptive-example-multisite.html

http://docs.openstack.org/arch-design/content/prescriptive-examples-multi-cloud.html

http://docs.openstack.org/arch-design/content/massively_scalable.html

http://docs.openstack.org/arch-design/content/specialized.html

You should figure out what your goals are and then choose an architecture.
If it's a proof-of-concept and you have the hardware, the install guide
should meet the needs. If you need an all-in-one to try out, see RDO
(packstack), stackgeek, or devstack.

If it's a client and server, how will communication happen between the
services?

Communication happens over RPC with remote procedure calls and other ways
such as reading/writing database information.

May I know your IRC?

We have established support channels in http://ask.openstack.org and the
OpenStack mailing list so you don't need to reach out to me directly.

Anne

Thanks in advance.

BR,
Sharath

On Fri, Aug 22, 2014 at 9:21 AM, Anne Gentle anne@openstack.org wrote:

On Thu, Aug 21, 2014 at 2:11 AM, Sharath V vsharathis@gmail.com wrote:

Dear Friends, I have a doubt, please clarify it for me! When I started
understanding openstack, there were three nodes: a) controller node
b) compute node c) network node
i) As I understand it, the controller node contains all the components like
nova, neutron, cinder, glance, swift, Horizon, etc.

ii) The compute node runs nova and neutron, but not all components.

iii) The network node runs nova and neutron.

This three node description is for the install guide, where our goal is
to get you to be able to launch an instance or store an object for example.
For running a real production cloud there are many more considerations. I'd
suggest reading the Operations Guide first, such as
http://docs.openstack.org/openstack-ops/content/cloud_controller_design.html
which says that the cloud controller is just a simplification.

When I was reading the docs, they said things like openstack compute
(Controller Services) and openstack network services (Cloud controller).
Can you please clarify: does each and every component of openstack have a
controller and a client? [like Nova Service (Controller) - Nova Client,
Neutron Service (Controller) - neutron client, cinder controller - cinder
client] or (nova controller for compute, nova-network for cloud controller)?

There's a much more detailed description of each service in
http://docs.openstack.org/admin-guide-cloud/content/compute-service.html

Is Nova only a controller? If nova is only a controller, it must act as
orchestration, right? If yes, then why do we have to use heat for
orchestration?

The nova project works on launching instances, scheduling which host it
launches to, providing the REST API service, allocating network and storage
resources to VMs. I have seen orchestration used for these collective
activities. When you want to orchestrate several cloud resources in order
to run an application such as WordPress on a virtual platform, then you
orchestrate the application with the heat project. Read more here:
http://docs.openstack.org/admin-guide-cloud/content/orchestration-service.html

Hope this helps you dig deeper into the documentation.

Anne

If anything is wrong, please correct me.

If you have any document or guide, please route it to me.

Thank you in advance,

--
Best Regards,
Sharath


OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Best Regards,
Sharath


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [TripleO][undercloud][Horizon] Horizon dashboard not available on webpage

Hi, All:
I've deployed an undercloud on bare metal using the TripleO devtest process.
But when I access the horizon dashboard in a web page, I get the error below
in the apache log; please help:

[Mon Aug 25 05:01:34.749209 2014] [authz_core:debug] [pid 8109:tid 3053452096] mod_authz_core.c(802): [client 72.163.255.76:57077] AH01626: authorization result of Require all granted: granted
[Mon Aug 25 05:01:34.749249 2014] [authz_core:debug] [pid 8109:tid 3053452096] mod_authz_core.c(802): [client 72.163.255.76:57077] AH01626: authorization result of <RequireAny>: granted
[Mon Aug 25 05:01:34.749338 2014] [authz_core:debug] [pid 8109:tid 3053452096] mod_authz_core.c(802): [client 72.163.255.76:57077] AH01626: authorization result of Require all granted: granted
[Mon Aug 25 05:01:34.749350 2014] [authz_core:debug] [pid 8109:tid 3053452096] mod_authz_core.c(802): [client 72.163.255.76:57077] AH01626: authorization result of <RequireAny>: granted
[Mon Aug 25 05:01:34.749611 2014] [:info] [pid 8106:tid 3046533952] [remote 72.163.255.76:51894] mod_wsgi (pid=8106, process='horizon', application=''): Loading WSGI script '/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/django.wsgi'.
[Mon Aug 25 05:01:35.046323 2014] [:error] [pid 8106:tid 3046533952] ...osprofiler.messaging
[Mon Aug 25 05:01:35.046432 2014] [:error] [pid 8106:tid 3046533952] [remote 72.163.255.76:51894] mod_wsgi (pid=8106): Exception occurred processing WSGI script '/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/django.wsgi'.
[remote 72.163.255.76:51894] Traceback (most recent call last):
[remote 72.163.255.76:51894]   File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 187, in __call__
[remote 72.163.255.76:51894]     self.load_middleware()
[remote 72.163.255.76:51894]   File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/core/handlers/base.py", line 44, in load_middleware
[remote 72.163.255.76:51894]     for middleware_path in settings.MIDDLEWARE_CLASSES:
[remote 72.163.255.76:51894]   File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/conf/__init__.py", line 54, in __getattr__
[remote 72.163.255.76:51894]     self._setup(name)
[remote 72.163.255.76:51894]   File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/conf/__init__.py", line 49, in _setup
[remote 72.163.255.76:51894]     self._wrapped = Settings(settings_module)
[remote 72.163.255.76:51894]   File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/conf/__init__.py", line 128, in __init__
[remote 72.163.255.76:51894]     mod = importlib.import_module(self.SETTINGS_MODULE)
[remote 72.163.255.76:51894]   File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/utils/importlib.py", line 40, in import_module
[remote 72.163.255.76:51894]     __import__(name)
[remote 72.163.255.76:51894]   File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/settings.py", line 28, in <module>
[remote 72.163.255.76:51894]     from openstack_dashboard import exceptions
[remote 72.163.255.76:51894]   File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/exceptions.py", line 22, in <module>
[remote 72.163.255.76:51894]     from keystoneclient import exceptions as keystoneclient
[remote 72.163.255.76:51894]   File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneclient/__init__.py", line 28, in <module>
[remote 72.163.255.76:51894]     from keystoneclient import client
[remote 72.163.255.76:51894]   File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneclient/client.py", line 13, in <module>
[remote 72.163.255.76:51894]     from keystoneclient import discover
[remote 72.163.255.76:51894]   File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneclient/discover.py", line 19, in <module>
[remote 72.163.255.76:51894]     from keystoneclient import session as client_session
[remote 72.163.255.76:51894]   File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneclient/session.py", line 27, in <module>
[remote 72.163.255.76:51894]     osprofiler_web = importutils.try_import("osprofiler.web")
[remote 72.163.255.76:51894]   File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneclient/openstack/common/importutils.py", line 71, in try_import
[remote 72.163.255.76:51894]     return import_module(import_str)
[remote 72.163.255.76:51894]   File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneclient/openstack/common/importutils.py", line 57, in import_module
[remote 72.163.255.76:51894]     __import__(import_str)
[remote 72.163.255.76:51894]   File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../osprofiler/__init__.py", line 23, in <module>
[remote 72.163.255.76:51894]     utils.import_modules_from_package("osprofiler.notifiers")
[remote 72.163.255.76:51894]   File "/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../osprofiler/utils.py", line 173, in import_modules_from_package
[remote 72.163.255.76:51894]     __import__(module_name)
[remote 72.163.255.76:51894] ValueError: Empty module name
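
Since the traceback ends inside osprofiler's package scan, one sanity check (a guess, reusing the venv path from the log) is to reproduce the import outside apache:

/opt/stack/venvs/openstack/bin/python -c "import osprofiler.notifiers"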

Best Regards!

Chao Yan
--------------
My twitter: Andy Yan @yanchao727  https://twitter.com/yanchao727
My Weibo: http://weibo.com/herewearenow
--------------


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] nova-network and loadbalancers

Hi,
Is there any way to make an HAProxy load balancer work with nova-network?
I am worried about the failover IP conflicting with the IP/MAC anti-spoofing
rules.

I am trying to configure 2 HAProxy nodes with keepalived and a failover IP.

OR

Is there a load balancer that can actually work with nova-network?

Thanks,


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] HA(High Availability ) mode - OpenStack

Hi,

Can anyone suggest where I can get information about how HA is implemented
for compute nodes? I know that a minimum of 3 controllers is required to
configure HA mode, but my question is: if I add more compute nodes as well,
will my cluster be more stable? Are the VMs replicated across compute nodes?

Can anyone provide information on this?

I am quite new to OpenStack and to the mailing list. Any help is appreciated.
Thank you,


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Hi,

So if you scale your compute nodes the cluster is not more stable, but you
get a better distribution of VMs. If one compute node goes down it will not
affect a mass of VMs, only those VMs that are running on it.

You can use availability zones or host aggregates to get a better
distribution of your VMs.
In addition you should use shared storage for the compute base images (root
disk and ephemeral disk) so that you can use live migration and host
evacuation.
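
For reference, those pieces map onto the nova CLI roughly like this (a sketch; the aggregate, zone, and host names are made up):

# group hosts into an availability zone
nova aggregate-create rack1-agg rack1-az
nova aggregate-add-host rack1-agg compute01

# after a host failure, rebuild an instance on another host (shared storage)
nova evacuate <instance-id> compute02 --on-shared-storage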

Cheers
Heiko



Anynines.com

B.Sc. Informatik
CIO
Heiko Krämer

Twitter: @anynines

Managing directors: Alexander Faißt, Dipl.-Inf.(FH) Julian Fischer
Commercial register: AG Saarbrücken HRB 17413, VAT ID: DE262633168
Registered office: Saarbrücken
Avarteq GmbH

[Openstack] Icehouse ML2 + OVS security group problems

Hi,

I've managed to set up every other component, but neutron security
groups don't want to work. I have connectivity between all machines but
nothing ever hits the iptables rules.

I see that on compute nodes I get correct firewall rules:

:neutron-openvswi-ic2c7ef23-2 - [0:0]
:neutron-openvswi-oc2c7ef23-2 - [0:0]
:neutron-openvswi-sc2c7ef23-2 - [0:0]
-A neutron-openvswi-FORWARD -m physdev --physdev-out tapc2c7ef23-2d --physdev-is-bridged -j neutron-openvswi-sg-chain
-A neutron-openvswi-FORWARD -m physdev --physdev-in tapc2c7ef23-2d --physdev-is-bridged -j neutron-openvswi-sg-chain
-A neutron-openvswi-INPUT -m physdev --physdev-in tapc2c7ef23-2d --physdev-is-bridged -j neutron-openvswi-oc2c7ef23-2
-A neutron-openvswi-ic2c7ef23-2 -m state --state INVALID -j DROP
-A neutron-openvswi-ic2c7ef23-2 -m state --state RELATED,ESTABLISHED -j RETURN
-A neutron-openvswi-ic2c7ef23-2 -p tcp -m tcp --dport 22 -j RETURN
-A neutron-openvswi-ic2c7ef23-2 -s 10.3.0.2/32 -p udp -m udp --sport 67 --dport 68 -j RETURN
-A neutron-openvswi-ic2c7ef23-2 -s 10.3.0.4/32 -p udp -m udp --sport 67 --dport 68 -j RETURN
-A neutron-openvswi-ic2c7ef23-2 -j neutron-openvswi-sg-fallback
-A neutron-openvswi-oc2c7ef23-2 -p udp -m udp --sport 68 --dport 67 -j RETURN
-A neutron-openvswi-oc2c7ef23-2 -j neutron-openvswi-sc2c7ef23-2
-A neutron-openvswi-oc2c7ef23-2 -p udp -m udp --sport 67 --dport 68 -j DROP
-A neutron-openvswi-oc2c7ef23-2 -m state --state INVALID -j DROP
-A neutron-openvswi-oc2c7ef23-2 -m state --state RELATED,ESTABLISHED -j RETURN
-A neutron-openvswi-oc2c7ef23-2 -p tcp -m tcp --dport 22 -j RETURN
-A neutron-openvswi-oc2c7ef23-2 -j neutron-openvswi-sg-fallback
-A neutron-openvswi-sc2c7ef23-2 -s 10.3.0.5/32 -m mac --mac-source FA:16:3E:F5:ED:16 -j RETURN
-A neutron-openvswi-sc2c7ef23-2 -j DROP
-A neutron-openvswi-sg-chain -m physdev --physdev-out tapc2c7ef23-2d --physdev-is-bridged -j neutron-openvswi-ic2c7ef23-2
-A neutron-openvswi-sg-chain -m physdev --physdev-in tapc2c7ef23-2d --physdev-is-bridged -j neutron-openvswi-oc2c7ef23-2

and openvswitch config also seems ok:

97e21921-f8e5-4156-8f9b-b976bc6ed278
    Bridge br-int
        fail_mode: secure
        Port int-vm_st_mgmt
            Interface int-vm_st_mgmt
        ....
        Port "qvoc2c7ef23-2d"
            tag: 4
            Interface "qvoc2c7ef23-2d"
        Port "qvo50e4e17b-ea"
            tag: 3
            Interface "qvo50e4e17b-ea"
        ...

and I also see it as linux bridge:
~☠ brctl show qbrc2c7ef23-2d
bridge name bridge id STP enabled interfaces
qbrc2c7ef23-2d 8000.1a3cb28c1f78 no qvbc2c7ef23-2d
tapc2c7ef23-2d

Yet no packet ever hits the iptables rules. Tunneling works fine, I can make any connection between all machines, DHCP/L3 works, and I can see traffic on the tap:

Chain neutron-openvswi-INPUT (1 references)
pkts bytes target prot opt in out source destination
0 0 neutron-openvswi-o5c1b8fd3-0 all -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-in tap5c1b8fd3-04 --physdev-is-bridged
0 0 neutron-openvswi-oeece6804-f all -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-in tapeece6804-f4 --physdev-is-bridged
0 0 neutron-openvswi-oc2c7ef23-2 all -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-in tapc2c7ef23-2d --physdev-is-bridged
0 0 neutron-openvswi-o50e4e17b-e all -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-in tap50e4e17b-ea --physdev-is-bridged
0 0 neutron-openvswi-o19204ab8-4 all -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-in tap19204ab8-4d --physdev-is-bridged
0 0 neutron-openvswi-o187624fb-e all -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-in tap187624fb-e4 --physdev-is-bridged

Chain INPUT (policy ACCEPT 86M packets, 79G bytes)
pkts bytes target prot opt in out source destination
86M 79G neutron-openvswi-INPUT all -- * * 0.0.0.0/0 0.0.0.0/0

My configuration:

kernel 3.15.7-1.el6.elrepo.x86_64

☠ rpm -qa |grep -P '(nova|neutron)'
openstack-neutron-2014.1.2-1.el6.noarch
openstack-nova-compute-2014.1.1-3.el6.noarch
python-nova-2014.1.1-3.el6.noarch
python-novaclient-2.17.0-2.el6.noarch
python-neutronclient-2.3.4-1.el6.noarch
openstack-nova-common-2014.1.1-3.el6.noarch
python-neutron-2014.1.2-1.el6.noarch
openstack-neutron-openvswitch-2014.1.2-1.el6.noarch

nova.conf:

vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver  # tried with the legacy OVS one, didn't help
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver

ovs_neutron_plugin.ini:

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

[OVS]
enable_tunneling=False
integration_bridge=br-int
local_ip=172.16.125.25
tunnel_bridge=br-tun
tunnel_type=vxlan
tenant_network_type=vxlan
tunnel_id_ranges=8192:16384
bridge_mappings=vm_st_mgmt:vm_st_mgmt

[AGENT]
polling_interval=2
tunnel_types=vxlan

neutron plugin.ini:

[ml2]
tenant_network_types = vxlan
mechanism_drivers = openvswitch,linuxbridge

[ml2_type_vxlan]
vni_ranges = 8192:16384

[securitygroup]
# Controls if neutron security group is enabled or not.
# It should be false when you use nova security group.
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

I attached dumps from iptables/ovs/brctl

--
Mariusz Gronczewski, Administrator

Efigence S. A.
ul. Wołoska 9a, 02-583 Warszawa
T: [+48] 22 380 13 13
F: [+48] 22 380 13 14
E: mariusz.gronczewski@efigence.com


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

If anyone has a similar problem: CentOS 6 has broken default settings
in /etc/sysctl.conf that disable iptables on bridges. Change them to
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1

and it will work
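
After editing, the settings can be applied and verified without a reboot:

sysctl -p /etc/sysctl.conf
sysctl net.bridge.bridge-nf-call-iptables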


[Openstack] How to specify the "container-format" when creating an image in Horizon?

With the command “glance image-create”, a user can specify the “container-format” when creating an image:

stack@Controller:~/devstack$ glance help image-create

usage: glance image-create [--id <IMAGE_ID>] [--name <NAME>] [--store <STORE>]
                           [--disk-format <DISK_FORMAT>]
                           [--container-format <CONTAINER_FORMAT>]
                           [--owner <TENANT_ID>] [--size <SIZE>]
                           [--min-disk <DISK_GB>] [--min-ram <DISK_RAM>]
                           [--location <IMAGE_URL>] [--file <FILE>]
                           [--checksum <CHECKSUM>] [--copy-from <IMAGE_URL>]
                           [--is-public {True,False}]
                           [--is-protected {True,False}]
                           [--property <key=value>] [--human-readable]
                           [--progress]

Create a new image.

Optional arguments:

  --id <IMAGE_ID>       ID of image to reserve.
  --name <NAME>         Name of image.
  --store <STORE>       Store to upload image to.
  --disk-format <DISK_FORMAT>
                        Disk format of image. Acceptable formats: ami, ari,
                        aki, vhd, vmdk, raw, qcow2, vdi, and iso.
  --container-format <CONTAINER_FORMAT>
                        Container format of image. Acceptable formats: ami,
                        ari, aki, bare, and ovf.

Where is this option in Horizon?

Thanks,

Danny


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Glance does not really do anything with container_format at the moment. It
is set to the same as the disk format for the three Amazon image types
(ami, aki, ari); ovf is likewise covered by the image format; otherwise it
just treats images as 'bare.' As such, Horizon just sets it there itself
instead of bothering the user by asking for information it can already
determine. Please have a look at the attached screenshot.

Best Regards,
Swapnil Kulkarni
irc : coolsvap
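
For comparison, the CLI still lets you set it explicitly; a typical invocation (the image name and file are placeholders):

glance image-create --name cirros-0.3.2 --disk-format qcow2 \
    --container-format bare --is-public True --file cirros-0.3.2-x86_64-disk.img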



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] RabbitMQ issues since upgrading to Icehouse

Hi,

Since upgrading to Icehouse we have seen increased issues with messaging relating to RabbitMQ.

  1. We often get reply_xxxxxx queues starting to fill up with unacked messages. To fix this we need to restart the offending service, usually nova-api or nova-compute.

  2. If you kill a node so as to force an ungraceful disconnect of rabbit, the connection “object?” still sticks around in rabbit. Starting the service again means there are now 2 consumers: the new one and the phantom old one. This then leads to messages piling up in the unacked queue. This feels like a rabbit bug to me but just thought I’d mention it here too.

We have a setup that includes icehouse computes and havana computes in the same cloud, and we only see this on the icehouse computes. This is using Trusty and RabbitMQ 3.3.4.

Has anyone seen anything like this too?

Thanks,
Sam
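
For anyone chasing the same symptom, the stuck queues and phantom consumers are visible from rabbitmqctl:

# queues with unacknowledged messages piling up
rabbitmqctl list_queues name messages_unacknowledged consumers

# consumers still registered on each queue (phantoms show up here)
rabbitmqctl list_consumers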


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi Sam,

I've been tracking down other bugs with oslo.messaging + rabbit: My
problem was publishers may deliver messages to the void if rabbit restarts.
See https://bugs.launchpad.net/oslo.messaging/+bug/1338732 and
https://review.openstack.org/#/c/109373/.

Cheers,
--
Noel

On Mon, Aug 25, 2014 at 4:17 PM, Sam Morrison sorrison@gmail.com wrote:

Hi,

Since upgrading to Icehouse we have seen increased issues with messaging
relating to RabbitMQ.

  1. We often get reply_xxxxxx queues starting to fill up with unacked
    messages. To fix this we need to restart the offending service. Usually
    nova-api or nova-compute.

  2. If you kill a node so as to force an ungraceful disconnect of rabbit
    the connection “object?” still sticks around in rabbit. Starting the
    service again means there are now 2 consumers. The new one and the phantom
    old one. This then leads to messages piling up in the unacked queue. This
    feels like a rabbit bug to me but just thought I’d mention it here too.

We have a setup that includes Icehouse computes and Havana computes in the
same cloud, and we only see this on the Icehouse computes. This is using
Trusty and RabbitMQ 3.3.4.

Has anyone seen anything like this too?

Thanks,
Sam


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] No route to host

I have a VM installed via the devstack.org script (stack.sh), and a running
instance of cirros-0.3.2-x86_64-uec.

My route -n command result is:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         162.242.233.1   0.0.0.0         UG    100    0        0 eth0
10.0.0.0        172.24.4.2      255.255.255.0   UG    0      0        0 br-ex
10.176.0.0      10.176.160.1    255.240.0.0     UG    0      0        0 eth1
10.176.160.0    0.0.0.0         255.255.224.0   U     0      0        0 eth1
10.208.0.0      10.176.160.1    255.240.0.0     UG    0      0        0 eth1
162.242.233.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
172.24.4.0      162.242.233.1   255.255.255.0   UG    0      0        0 eth0
172.24.4.0      0.0.0.0         255.255.255.0   U     0      0        0 br-ex
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
My nova list command shows the following result:

+--------------------------------------+-----------+--------+------------+-------------+-------------------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks                      |
+--------------------------------------+-----------+--------+------------+-------------+-------------------------------+
| 37eaa19c-3c65-4a4d-84b9-0d977c474a5c | instance1 | ACTIVE | -          | Running     | private1=10.0.0.4, 172.24.4.6 |
+--------------------------------------+-----------+--------+------------+-------------+-------------------------------+

I have a security group whose rules include:

Egress  IPv4 Any            0.0.0.0/0 (CIDR)
Egress  IPv6 Any            ::/0 (CIDR)
Ingress IPv4 ICMP           0.0.0.0/0 (CIDR)
Ingress IPv4 TCP 22 (SSH)   0.0.0.0/0 (CIDR)

but when I run the following command:

ssh cirros@172.24.4.6

it fails with:

"ssh: connect to host 172.24.4.6 port 22: No route to host"

I am also unable to ping the floating address.

The problem is that this is a very good result, but not an acceptable one for
me in this situation :-)

Can anybody help me with this?

Thanks in advance


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

What are the 10.0.0.0/24 and 172.24.4.2/24 networks for? I guess you
are using Neutron, which makes the networking more complicated; I think you
can try to debug Neutron.
And why are you using two routes to the same network?

172.24.4.0      162.242.233.1   255.255.255.0   UG    0      0        0 eth0
172.24.4.0      0.0.0.0         255.255.255.0   U     0      0        0 br-ex
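
A minimal way to test the path from the router's namespace on the DevStack
host (the <router-id> placeholder is your Neutron router UUID):

    sudo ip netns list
    sudo ip netns exec qrouter-<router-id> ping -c 3 10.0.0.4     # instance fixed IP
    sudo ip netns exec qrouter-<router-id> ping -c 3 172.24.4.6   # its floating IP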

Best Regards!

Chao Yan
--------------
My twitter: Andy Yan @yanchao727 (https://twitter.com/yanchao727)
My Weibo: http://weibo.com/herewearenow
--------------

2014-08-26 16:06 GMT+08:00 javed alam javedamar@gmail.com:

I have a VM installed via the devstack.org script (stack.sh), and a running
instance of cirros-0.3.2-x86_64-uec.

My route -n command result is:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         162.242.233.1   0.0.0.0         UG    100    0        0 eth0
10.0.0.0        172.24.4.2      255.255.255.0   UG    0      0        0 br-ex
10.176.0.0      10.176.160.1    255.240.0.0     UG    0      0        0 eth1
10.176.160.0    0.0.0.0         255.255.224.0   U     0      0        0 eth1
10.208.0.0      10.176.160.1    255.240.0.0     UG    0      0        0 eth1
162.242.233.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
172.24.4.0      162.242.233.1   255.255.255.0   UG    0      0        0 eth0
172.24.4.0      0.0.0.0         255.255.255.0   U     0      0        0 br-ex
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0

My nova list command shows the following result:

+--------------------------------------+-----------+--------+------------+-------------+-------------------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks                      |
+--------------------------------------+-----------+--------+------------+-------------+-------------------------------+
| 37eaa19c-3c65-4a4d-84b9-0d977c474a5c | instance1 | ACTIVE | -          | Running     | private1=10.0.0.4, 172.24.4.6 |
+--------------------------------------+-----------+--------+------------+-------------+-------------------------------+

I have a security group whose rules include:

Egress  IPv4 Any            0.0.0.0/0 (CIDR)
Egress  IPv6 Any            ::/0 (CIDR)
Ingress IPv4 ICMP           0.0.0.0/0 (CIDR)
Ingress IPv4 TCP 22 (SSH)   0.0.0.0/0 (CIDR)

but when I run the following command:

ssh cirros@172.24.4.6

it fails with:

"ssh: connect to host 172.24.4.6 port 22: No route to host"

I am also unable to ping the floating address.

The problem is that this is a very good result, but not an acceptable one for me in this situation :-)

Can anybody help me with this?

Thanks in advance


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] blockdevicemapping cause long time detaching

Hi all,

I used a Heat template to launch a server and attach a Cinder volume to it.
I found that if I use the block_device_mapping parameter in my server
properties, it takes a very long time to detach the volume when I delete the
stack; without block_device_mapping, the volume is detached immediately.

And I don't see any difference from the VM's perspective: whether I use
block_device_mapping or not, the volume is attached to my VM as 'vdb'.

I am curious what this block_device_mapping means and how nova behaves
differently under the hood.

Here's my heat template:

heat_template_version: 2013-05-23

description: |
  jfdkfjdk System
outputs:
  instance_ip:
    description: the ip address
    value: {get_attr: [Myserver, networks, OAM120, 0]}
parameters:
  image_name:
    label: Image
    type: string
    default: MCcloudinit
  flavor_name:
    label: fff
    type: string
    default: 4x8x160
  pri_net1:
    label: Netffd
    type: string
    default: OAM120
resources:
  Myserver:
    metadata:
      mytest_id: tzhou002isgreat
    type: OS::Nova::Server
    properties:
      availability_zone: nova
      block_device_mapping: [{volume_id: {get_resource: disk1}, delete_on_termination: 'true', device_name: vdb}]
      flavor: { get_param: flavor_name }
      image: { get_param: image_name }
      config_drive: 'True'
      name:
        str_replace:
          params:
            $system_name: {get_param: "OS::stack_name"}
          template: $system_name-0-0-1
      networks:
      - network: {get_param: pri_net1}
      security_groups: [default]
  disk1:
    type: OS::Cinder::Volume
    properties:
      availability_zone: nova
      name: server1_disk1
      size: '1'
  disk_attach:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: {get_resource: Myserver}
      volume_id: {get_resource: disk1}
  my_multipart_mime:
    properties:
      parts:
      - config: {get_resource: 'Myserver'}
      - config: {get_resource: 'disk1'}
    type: OS::Heat::MultipartMime


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] No IP addresses through DHCP because DHCP port down?

Hello

The instances I create don't get an IP address. I can see the request on
the neutron node but the DHCP server is not answering.

Neutron successfully creates ports for the instances; those ports are up and
working as expected.

The DHCP port is also created, but with status DOWN, and its
binding:vif_type is binding_failed.

Please see here: http://pastebin.com/E1Xx1pNw

How can it be binding_failed when the port has already been created?

What is also strange: in a different Icehouse installation the tag of the
DHCP port is "1", while in this one it is "4095".

Thanks for any help!

Christoph


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

You can check what error occurred in your Neutron log file.
This link may help you: https://bugs.launchpad.net/neutron/+bug/1303998
And I think the tag is just a VLAN tag, so it's probably not related.
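
A quick place to start (the log path assumes a packaged install; adjust if yours differs):

    # Is the Open vSwitch agent alive on the node hosting the DHCP port?
    neutron agent-list
    # Then look for the binding error itself
    grep -i error /var/log/neutron/openvswitch-agent.log | tail -20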

Best Regards!

Chao Yan
--------------
My twitter: Andy Yan @yanchao727 (https://twitter.com/yanchao727)
My Weibo: http://weibo.com/herewearenow
--------------

2014-08-26 20:17 GMT+08:00 Chris contact@progbau.de:

Hello

The instances I create don't get an IP address. I can see the request on
the neutron node but the DHCP server is not answering.

Neutron successfully creates ports for the instances; those ports are up and
working as expected.

The DHCP port is also created, but with status DOWN, and its
binding:vif_type is binding_failed.

Please see here: http://pastebin.com/E1Xx1pNw

How can it be binding_failed when the port has already been created?

What is also strange: in a different Icehouse installation the tag of the
DHCP port is "1", while in this one it is "4095".

Thanks for any help!

Christoph


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Re: [Openstack] [neutron] Network node can't see external network

Hi Remo,

Thanks, that really helped.

David

On 22/08/2014 16:51, Remo Mattei wrote:

Your br-ex does not have an IP. Your eth2 should not have the IP, which is
correct; br-ex should carry it instead.

Try that.
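
A minimal sketch of that, assuming 192.168.50.0/24 is the external range
shown below and that .2 is unused in it:

    # Example only: pick a free address in your external range
    ip addr add 192.168.50.2/24 dev br-ex
    ip link set br-ex up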

Remo

Inviato da IPad ()

Il giorno Aug 22, 2014, alle ore 5:05, David Pintor
hiya@davidpint.org ha scritto:

Hi,

I have followed the Icehouse doc to install a 3 node environment in
CentOS:
http://docs.openstack.org/icehouse/install-guide/install/yum/content/basics-neutron-networking-network-node.html

In my network node, my external NIC is configured without an IP as per
the documentation.

[root@network ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
UUID=4a1e4bc2-dac3-4c0a-985a-5a4a2203196e
BOOTPROTO=none
ONBOOT=yes
TYPE=Ethernet
HWADDR=00:0c:29:8d:cc:0a

However I cannot ping the external network (.80 is the virtual
router).

[root@network ~]# ping 192.168.50.80
PING 192.168.50.80 (192.168.50.80) 56(84) bytes of data.
From 10.0.0.21 icmp_seq=1 Destination Host Unreachable
From 10.0.0.21 icmp_seq=2 Destination Host Unreachable
From 10.0.0.21 icmp_seq=3 Destination Host Unreachable

This is the info of my environment:

[root@network ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:8d:cc:f6 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.21/24 brd 10.0.0.255 scope global eth0
    inet6 fe80::20c:29ff:fe8d:ccf6/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:8d:cc:00 brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.21/24 brd 10.0.1.255 scope global eth1
    inet6 fe80::20c:29ff:fe8d:cc00/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:8d:cc:0a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::20c:29ff:fe8d:cc0a/64 scope link
       valid_lft forever preferred_lft forever
5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 9e:d9:ee:cf:18:ff brd ff:ff:ff:ff:ff:ff
6: br-ex: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:0c:29:8d:cc:0a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::80c0:34ff:fea7:ab5e/64 scope link
       valid_lft forever preferred_lft forever
10: qr-d0661ff1-a9: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 2a:6d:c7:81:de:8e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::286d:c7ff:fe81:de8e/64 scope link
       valid_lft forever preferred_lft forever
11: br-int: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 6a:aa:36:da:13:45 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::c058:39ff:fe4c:20f6/64 scope link
       valid_lft forever preferred_lft forever
13: br-tun: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 56:74:ff:da:74:43 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::9802:72ff:fed0:f05a/64 scope link
       valid_lft forever preferred_lft forever

[root@network ~]# neutron net-list
+--------------------------------------+----------+-------------------------------------------------------+
| id                                   | name     | subnets                                               |
+--------------------------------------+----------+-------------------------------------------------------+
| 2abc487a-09ad-4c5c-bb6b-a98a5a255ad4 | ext-net  | bca9b325-617f-4a0c-878d-44f384056c1c 192.168.50.0/24  |
| 89954a51-5296-4743-97ac-19ddeb7010f4 | demo-net | 9760500a-1183-4e12-a975-ff6cbdafd27b 192.168.100.0/24 |
+--------------------------------------+----------+-------------------------------------------------------+

[root@network ~]# neutron subnet-list
+--------------------------------------+-------------+------------------+------------------------------------------------------+
| id                                   | name        | cidr             | allocation_pools                                     |
+--------------------------------------+-------------+------------------+------------------------------------------------------+
| 9760500a-1183-4e12-a975-ff6cbdafd27b | demo-subnet | 192.168.100.0/24 | {"start": "192.168.100.2", "end": "192.168.100.254"} |
| bca9b325-617f-4a0c-878d-44f384056c1c | ext-subnet  | 192.168.50.0/24  | {"start": "192.168.50.80", "end": "192.168.50.89"}   |
+--------------------------------------+-------------+------------------+------------------------------------------------------+

[root@network ~]# neutron router-list
+--------------------------------------+-------------+------------------------------------------------------------------------------+
| id                                   | name        | external_gateway_info                                                        |
+--------------------------------------+-------------+------------------------------------------------------------------------------+
| 0786427e-d4c0-403a-a2cd-0182bc3bee1c | demo-router | {"network_id": "2abc487a-09ad-4c5c-bb6b-a98a5a255ad4", "enable_snat": true} |
+--------------------------------------+-------------+------------------------------------------------------------------------------+

[root@network ~]# ovs-vsctl show
537302fd-99cc-45ff-b470-2c924daf806e
    Bridge br-ex
        Port "eth2"
            Interface "eth2"
        Port "qg-783ec99d-ba"
            Interface "qg-783ec99d-ba"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        Port "tap7ba88075-80"
            tag: 1
            Interface "tap7ba88075-80"
                type: internal
        Port "qr-b13b57db-17"
            tag: 1
            Interface "qr-b13b57db-17"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qr-d0661ff1-a9"
            tag: 4095
            Interface "qr-d0661ff1-a9"
                type: internal
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-0a00011f"
            Interface "gre-0a00011f"
                type: gre
                options: {in_key=flow, local_ip="10.0.1.21", out_key=flow, remote_ip="10.0.1.31"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "1.11.0"

Any hints would be much appreciated!

Cheers,

David


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi,

After sorting this out, I found out that my VMs weren't able to get an
IP via DHCP.

After a lot of troubleshooting, I fixed it by enabling IP forwarding
(net.ipv4.ip_forward=1) on the compute node, which is actually not
mentioned in the documentation:
http://docs.openstack.org/icehouse/install-guide/install/yum/content/neutron-ml2-compute-node.html
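
For reference, enabling it amounts to:

    sysctl -w net.ipv4.ip_forward=1                      # takes effect immediately
    echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf   # persists across reboots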

I'm not quite sure whether this has been omitted by mistake or whether
my configuration could be wrong somewhere else.

Any thoughts? Anyone out there with a similar 3-node configuration?

Thanks,

David

On 22/08/2014 16:51, Remo Mattei wrote:
Your br-ex does not have an IP. Your eth2 should not have the IP, which is
correct; br-ex should carry it instead.

Try that.

Remo


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Neutron metadata slow

hi guys,
I have a problem.
when I create a new tenant router per project, my Ubuntu instances fetch
metadata very slowly. I don't see any metadata-related errors in the service logs.

Any hints would be much appreciated!


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On 27/08/14 03:23, Nhan Cao wrote:
hi guys,
I have a problem.
when I create a new tenant router per project, my Ubuntu instances fetch
metadata very slowly. I don't see any metadata-related errors in the service logs.

Any hints would be much appreciated!


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Are you seeing a drop in speed after an upgrade? I'm wondering if you're
also seeing https://bugs.launchpad.net/nova/+bug/1361357
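
Either way, one way to quantify the slowness from inside an instance (the
address below is the standard metadata endpoint):

    time curl -s http://169.254.169.254/latest/meta-data/instance-id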


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Instance cannot get any ip address

Hi all,

I was working on the external network access configuration, but I find that
after I create an instance, neither the internal nor the external IP address
shows up in the output of the "ifconfig" command.

However, on the Dashboard I can see that both of these IP addresses have been
allocated to that instance successfully.

Does anyone know what's going on?

I do appreciate anyone's help.

Thanks very much!


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

What is the output of ifconfig? Are you using Neutron and ip netns?
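
If it is Neutron, a quick check on the network node (the <network-id>
placeholder is the Neutron network UUID) would be:

    # Does the DHCP namespace exist, and does its port have an address?
    sudo ip netns | grep qdhcp
    sudo ip netns exec qdhcp-<network-id> ip addr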

Best Regards!

Chao Yan
--------------
My twitter: Andy Yan @yanchao727 (https://twitter.com/yanchao727)
My Weibo: http://weibo.com/herewearenow
--------------

2014-08-27 13:48 GMT+08:00 Xianyi Ye yexianyi@sina.com:

Hi all,

I was working on the external network access configuration, but I find that
after I create an instance, neither the internal nor the external IP address
shows up in the output of the "ifconfig" command.

However, on the Dashboard I can see that both of these IP addresses have been
allocated to that instance successfully.

Does anyone know what's going on?

I do appreciate anyone’s help.

Thanks very much!


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] q-dhcp error in installation with devstack

Hi Every one,

Please guide me on what I am doing wrong. We are using devstack for our
OpenStack installation; however, the script fails due to q-dhcp. Our localrc
file and error log follow. Please advise.

[[local|localrc]]
SERVICE_TOKEN=azertytoken
ADMIN_PASSWORD=yespassword
MYSQL_PASSWORD=stackdb
RABBIT_PASSWORD=stackqueue
SERVICE_PASSWORD=$ADMIN_PASSWORD
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service tempest
enable_service q-lbaas
enable_service neutron
LOGFILE=$DEST/logs/stack.sh.log
SWIFT_REPLICAS=1
SWIFT_DATA_DIR=$DEST/data

Error log:

2014-08-27 03:33:47.619 | ++ euca-bundle-image -r x86_64 -i /home/stack/devstack/files/images/cirros-0.3.2-x86_64-uec/cirros-0.3.2-x86_64-blank.img -d /home/stack/devstack/files/images/s3-materials/cirros-0.3.2
2014-08-27 03:33:48.094 | Checking image
2014-08-27 03:33:48.094 | Encrypting image
2014-08-27 03:33:48.094 | Splitting image...
2014-08-27 03:33:48.094 | Part: cirros-0.3.2-x86_64-blank.img.part.00
2014-08-27 03:33:48.094 | Generating manifest /home/stack/devstack/files/images/s3-materials/cirros-0.3.2/cirros-0.3.2-x86_64-blank.img.manifest.xml
2014-08-27 03:33:48.099 | ++ [[ stack == \u\n\s\t\a\c\k ]]
2014-08-27 03:33:48.099 | ++ [[ stack == \c\l\e\a\n ]]
2014-08-27 03:33:48.100 | + merge_config_group /home/stack/devstack/local.conf post-extra
2014-08-27 03:33:48.100 | + local localfile=/home/stack/devstack/local.conf
2014-08-27 03:33:48.100 | + shift
2014-08-27 03:33:48.100 | + local matchgroups=post-extra
2014-08-27 03:33:48.100 | + [[ -r /home/stack/devstack/local.conf ]]
2014-08-27 03:33:48.100 | + local configfile group
2014-08-27 03:33:48.100 | + for group in '$matchgroups'
2014-08-27 03:33:48.102 | ++ get_meta_section_files /home/stack/devstack/local.conf post-extra
2014-08-27 03:33:48.102 | ++ local file=/home/stack/devstack/local.conf
2014-08-27 03:33:48.102 | ++ local matchgroup=post-extra
2014-08-27 03:33:48.102 | ++ [[ -r /home/stack/devstack/local.conf ]]
2014-08-27 03:33:48.102 | ++ awk -v matchgroup=post-extra '
2014-08-27 03:33:48.102 | /^[[.+\|.*]]/ {
2014-08-27 03:33:48.102 | gsub("[][]", "", $1);
2014-08-27 03:33:48.102 | split($1, a, "|");
2014-08-27 03:33:48.102 | if (a[1] == matchgroup)
2014-08-27 03:33:48.102 | print a[2]
2014-08-27 03:33:48.102 | }
2014-08-27 03:33:48.102 | ' /home/stack/devstack/local.conf
2014-08-27 03:33:48.107 | + [[ -x /home/stack/devstack/local.sh ]]
2014-08-27 03:33:48.108 | + service_check
2014-08-27 03:33:48.108 | + local service
2014-08-27 03:33:48.108 | + local failures
2014-08-27 03:33:48.108 | + SCREEN_NAME=stack
2014-08-27 03:33:48.108 | + SERVICE_DIR=/opt/stack/status
2014-08-27 03:33:48.108 | + [[ ! -d /opt/stack/status/stack ]]
2014-08-27 03:33:48.110 | ++ ls /opt/stack/status/stack/q-dhcp.failure
2014-08-27 03:33:48.115 | + failures=/opt/stack/status/stack/q-dhcp.failure
2014-08-27 03:33:48.115 | + for service in '$failures'
2014-08-27 03:33:48.118 | ++ basename /opt/stack/status/stack/q-dhcp.failure
2014-08-27 03:33:48.120 | + service=q-dhcp.failure
2014-08-27 03:33:48.120 | + service=q-dhcp
2014-08-27 03:33:48.120 | + echo 'Error: Service q-dhcp is not running'
2014-08-27 03:33:48.120 | Error: Service q-dhcp is not running
2014-08-27 03:33:48.120 | + '[' -n /opt/stack/status/stack/q-dhcp.failure
']'
2014-08-27 03:33:48.120 | + die 1316 'More details about the above errors
can be found with screen, with ./rejoin-stack.sh'
2014-08-27 03:33:48.120 | + local exitcode=0
2014-08-27 03:33:48.120 | [Call Trace]
2014-08-27 03:33:48.120 | ./stack.sh:1379:service_check
2014-08-27 03:33:48.120 | /home/stack/devstack/functions-common:1316:die
2014-08-27 03:33:48.128 | [ERROR] /home/stack/devstack/functions-common:1316
More details about the above errors can be found with screen, with
./rejoin-stack.sh
2014-08-27 03:33:49.138 | Error on exit
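
The trace itself names two places to look next; assuming the default DevStack paths:

    # Inspect the failure marker named in the trace above
    cat /opt/stack/status/stack/q-dhcp.failure
    # Reattach to the screen session and select the q-dhcp window for the full log
    ./rejoin-stack.sh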

Thanks


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] iops limiting with OpenStack Nova using Ceph/Network Storage

Hey All,

Is it possible to set up an iops/bytes-per-second limitation within nova using
libvirt methods? I've found the following links but can't get it to work in my
environment:

http://ceph.com/planet/openstack-ceph-rbd-and-qos/
https://wiki.openstack.org/wiki/InstanceResourceQuota

I see that the commit code specifically mentions 'file' and 'block', with no
network option:

    tune_items = ['disk_read_bytes_sec', 'disk_read_iops_sec',
        'disk_write_bytes_sec', 'disk_write_iops_sec',
        'disk_total_bytes_sec', 'disk_total_iops_sec']
    # Note(yaguang): Currently, the only tuning available is Block I/O
    # throttling for qemu.
    if self.source_type in ['file', 'block']:
        for key, value in extra_specs.iteritems():
            scope = key.split(':')
            if len(scope) > 1 and scope[0] == 'quota':
                if scope[1] in tune_items:
                    setattr(info, scope[1], value)
    return info

Is it possible to limit or establish QoS rules for network storage in nova
currently or only in cinder? My source protocol is rbd, qemu driver and raw
disk type.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Yes, it's possible, just like a volume from Cinder, but some work remains
to be done. We need to pass the iops/bandwidth throttle values in via config
or flavor metadata. If you want something as quick as possible, you
can add a hack like this:

    tune_items = ['disk_read_bytes_sec', 'disk_read_iops_sec',
        'disk_write_bytes_sec', 'disk_write_iops_sec',
        'disk_total_bytes_sec', 'disk_total_iops_sec']
    # Note(yaguang): Currently, the only tuning available is Block I/O
    # throttling for qemu.
    if self.source_type in ['file', 'block']:
        for key, value in extra_specs.iteritems():
            scope = key.split(':')
            if len(scope) > 1 and scope[0] == 'quota':
                if scope[1] in tune_items:
                    setattr(info, scope[1], value)

+        if not getattr(info, 'disk_total_bytes_sec'):
+            setattr(info, 'disk_total_bytes_sec',
+                    CONF.libvirt.images_default_bw_second)
+        if not getattr(info, 'disk_total_iops_sec'):
+            setattr(info, 'disk_total_iops_sec',
+                    CONF.libvirt.images_default_iops_second)
         return info
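
For the cases the existing code already handles (source_type 'file' or
'block'), the throttle values come from flavor extra specs, per the
InstanceResourceQuota wiki linked above; the flavor name here is just an
example:

    nova flavor-key m1.small set quota:disk_read_iops_sec=1000
    nova flavor-key m1.small set quota:disk_write_iops_sec=1000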

On Wed, Aug 27, 2014 at 4:32 PM, Tyler Wilson kupo@linuxdigital.net wrote:
Hey All,

Is it possible to set up an iops/bytes-per-second limitation within nova using
libvirt methods? I've found the following links but can't get it to work in my
environment:

http://ceph.com/planet/openstack-ceph-rbd-and-qos/
https://wiki.openstack.org/wiki/InstanceResourceQuota

I see that the commit code specifically mentions 'file' and 'block', with no
network option:

    tune_items = ['disk_read_bytes_sec', 'disk_read_iops_sec',
        'disk_write_bytes_sec', 'disk_write_iops_sec',
        'disk_total_bytes_sec', 'disk_total_iops_sec']
    # Note(yaguang): Currently, the only tuning available is Block I/O
    # throttling for qemu.
    if self.source_type in ['file', 'block']:
        for key, value in extra_specs.iteritems():
            scope = key.split(':')
            if len(scope) > 1 and scope[0] == 'quota':
                if scope[1] in tune_items:
                    setattr(info, scope[1], value)
    return info

Is it possible to limit or establish QoS rules for network storage in nova
currently or only in cinder? My source protocol is rbd, qemu driver and raw
disk type.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

--
Best Regards,

Wheat

[Openstack] Why security guide advise against uwsgi for deploying horizon with nginx?

HI all,

I'm trying to deploy Horizon with nginx, and to my surprise, the security
guide advises against uwsgi, which is the WSGI server of choice for all my
other WSGI apps.

In the security guide [1], it says

When using nginx, we recommend gunicorn
http://docs.gunicorn.org/en/latest/deploy.html as the wsgi host with an
appropriate number of synchronous workers. We strongly advise against
deployments using fastcgi, scgi, or uWSGI. We strongly advise against the
use of synthetic performance benchmarks when choosing a wsgi server.

Does anyone know the reason behind this? Is it just personal preference?
I see uwsgi has its own benefits beyond being performant. It has good
documentation, easy nginx integration, is stable and is configurable. Why
is it advised against?

[1]
http://docs.openstack.org/security-guide/content/ch025_web-dashboard.html

--
YY Inc. is hiring openstack and python developers. Interested? Check
http://soa.game.yy.com/jobs.html

--
Thanks,
Yuanle


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

The security guide is written with the general public in mind. While there's nothing inherently wrong with uWSGI, it is common for people to look at synthetic performance benchmarks and make their choice based on those. Unfortunately, uWSGI has an incredibly large number of options, choices, features, and configurations for a deployer to tweak, many of which can result in bad performance or security problems. Furthermore, segfaults are pretty common in that codebase (at least with some configuration options), which is not encouraging from a security perspective.

The conservative choice is to recommend gunicorn which is stable, has fewer features, and is generally easier to configure and deploy correctly. If you prefer uWSGI and already have experience running it, please feel free to use it with Horizon.
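
For reference, a minimal gunicorn invocation behind nginx might look like the
sketch below; the WSGI module path is an assumption, since it varies by
release and packaging, so adjust it to your install:

    pip install gunicorn
    # Module path below is an assumption; some Horizon releases ship a
    # django.wsgi file instead, so point gunicorn at whichever yours has
    gunicorn --workers 4 --bind 127.0.0.1:8000 openstack_dashboard.wsgi:application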

-Paul


From: sylecn sylecn@gmail.com
Sent: Wednesday, August 27, 2014 1:39 AM
To: openstack@lists.openstack.org
Subject: [Openstack] Why security guide advise against uwsgi for deploying horizon with nginx?

HI all,

I'm trying to deploy Horizon with nginx, and to my surprise, the security guide advises against uwsgi, which is the WSGI server of choice for all my other WSGI apps.

In the security guide [1], it says

When using nginx, we recommend gunicorn as the wsgi host with an appropriate number of synchronous workers. We strongly advise against deployments using fastcgi, scgi, or uWSGI. We strongly advise against the use of synthetic performance benchmarks when choosing a wsgi server.

Does anyone know the reason behind this? Is it just personal preference?
I see uwsgi has its own benefits beyond being performant. It has good documentation, easy nginx integration, is stable and is configurable. Why is it advised against?

[1] http://docs.openstack.org/security-guide/content/ch025_web-dashboard.html

--
YY Inc. is hiring openstack and python developers. Interested? Check http://soa.game.yy.com/jobs.html

--
Thanks,
Yuanle


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] High Latency to VMs

I've been trying to figure this one out for a while, so I'll try and be as thorough as possible in this post, but apologies if I leave anything pertinent out.

First off, I'm running a set up with one control node and 5 compute nodes, all created using the Stackgeek scripts - http://www.stackgeek.com/guides/gettingstarted.html. The first two (compute1 and compute 2) were created at the same time, compute3, 4 and 5 were added as needed later. My VMs are predominantly CentOS, while my Openstack nodes are Ubuntu 14.04.1

The symptom: irregular high latency/packet loss to VMs on all compute boxes except compute3. Mostly a pain when trying to do anything via ssh on a VM because the lag makes it difficult to do anything, but it shows itself quite nicely through pings as well:
--- 10.0.102.47 ping statistics ---
111 packets transmitted, 103 received, 7% packet loss, time 110024ms
rtt min/avg/max/mdev = 0.096/367.220/5593.100/1146.920 ms, pipe 6

I have tested these pings:
VM to itself (via its external IP) seems fine
VM to another VM is not fine
Hosting compute node to VM is not fine
My PC to VM is not fine (however the other way round works fine)

Top on a (32 core) compute node with laggy VMs:
top - 12:09:20 up 33 days, 21:35, 1 user, load average: 2.37, 4.95, 6.23
Tasks: 431 total, 2 running, 429 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.6 us, 3.4 sy, 0.0 ni, 96.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 65928256 total, 44210348 used, 21717908 free, 341172 buffers
KiB Swap: 7812092 total, 1887864 used, 5924228 free. 7134740 cached Mem

And for comparison, on the one compute node that doesn't seem to be suffering from this:
top - 12:12:20 up 33 days, 21:38, 1 user, load average: 0.28, 0.18, 0.15
Tasks: 399 total, 3 running, 396 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.3 us, 0.1 sy, 0.0 ni, 98.9 id, 0.6 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 65928256 total, 49986064 used, 15942192 free, 335788 buffers
KiB Swap: 7812092 total, 919392 used, 6892700 free. 39272312 cached Mem

Top on a laggy VM:
top - 11:02:53 up 27 days, 33 min, 3 users, load average: 0.00, 0.00, 0.00
Tasks: 91 total, 1 running, 90 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.2%us, 0.1%sy, 0.0%ni, 99.5%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 1020400k total, 881004k used, 139396k free, 162632k buffers
Swap: 1835000k total, 14984k used, 1820016k free, 220644k cached

http://imgur.com/blULjDa shows the hypervisor panel of Horizon. As you can see, Compute 3 has fewer resources used, but none of the compute nodes should be anywhere near overloaded from what I can tell.

Any ideas? Let me know if I'm missing anything obvious that would help with figuring this out!

Hannah


Radiant Worlds Limited is registered in England (company no: 07822337). This message is intended solely for the addressee and may contain confidential information. If you have received this message in error please send it back to us and immediately and permanently delete it from your system. Do not use, copy or disclose the information contained in this message or in any attachment. Please also note that transmission cannot be guaranteed to be secure or error-free.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi,

We are having the same issue here, and have already tried some solutions that
didn't work at all. Did you solve this problem?

Thank you,
Andre Aranha

On 27 August 2014 at 08:17, Hannah Fordham hfordham@radiantworlds.com
wrote:

I've been trying to figure this one out for a while, so I'll try and be as
thorough as possible in this post, but apologies if I leave anything
pertinent out.

First off, I’m running a set up with one control node and 5 compute nodes,
all created using the Stackgeek scripts -
http://www.stackgeek.com/guides/gettingstarted.html. The first two
(compute1 and compute 2) were created at the same time, compute3, 4 and 5
were added as needed later. My VMs are predominantly CentOS, while my
Openstack nodes are Ubuntu 14.04.1

The symptom: irregular high latency/packet loss to VMs on all compute
boxes except compute3. Mostly a pain when trying to do anything via ssh on
a VM because the lag makes it difficult to do anything, but it shows itself
quite nicely through pings as well:

--- 10.0.102.47 ping statistics ---

111 packets transmitted, 103 received, 7% packet loss, time 110024ms

rtt min/avg/max/mdev = 0.096/367.220/5593.100/1146.920 ms, pipe 6

I have tested these pings:

VM to itself (via its external IP) seems fine

VM to another VM is not fine

Hosting compute node to VM is not fine

My PC to VM is not fine (however the other way round works fine)

Top on a (32 core) compute node with laggy VMs:

top - 12:09:20 up 33 days, 21:35, 1 user, load average: 2.37, 4.95, 6.23

Tasks: 431 total, 2 running, 429 sleeping, 0 stopped, 0 zombie

%Cpu(s): 0.6 us, 3.4 sy, 0.0 ni, 96.0 id, 0.0 wa, 0.0 hi, 0.0 si,
0.0 st

KiB Mem: 65928256 total, 44210348 used, 21717908 free, 341172 buffers

KiB Swap: 7812092 total, 1887864 used, 5924228 free. 7134740 cached Mem

And for comparison, on the one compute node that doesn’t seem to be
suffering from this:

top - 12:12:20 up 33 days, 21:38, 1 user, load average: 0.28, 0.18, 0.15

Tasks: 399 total, 3 running, 396 sleeping, 0 stopped, 0 zombie

%Cpu(s): 0.3 us, 0.1 sy, 0.0 ni, 98.9 id, 0.6 wa, 0.0 hi, 0.0 si,
0.0 st

KiB Mem: 65928256 total, 49986064 used, 15942192 free, 335788 buffers

KiB Swap: 7812092 total, 919392 used, 6892700 free. 39272312 cached Mem

Top on a laggy VM:

top - 11:02:53 up 27 days, 33 min, 3 users, load average: 0.00, 0.00,
0.00

Tasks: 91 total, 1 running, 90 sleeping, 0 stopped, 0 zombie

Cpu(s): 0.2%us, 0.1%sy, 0.0%ni, 99.5%id, 0.1%wa, 0.0%hi, 0.0%si,
0.0%st

Mem: 1020400k total, 881004k used, 139396k free, 162632k buffers

Swap: 1835000k total, 14984k used, 1820016k free, 220644k cached

http://imgur.com/blULjDa shows the hypervisor panel of Horizon. As you
can see, Compute 3 has fewer resources used, but none of the compute nodes
should be anywhere near overloaded from what I can tell.

Any ideas? Let me know if I’m missing anything obvious that would help
with figuring this out!

Hannah


Radiant Worlds Limited is registered in England (company no: 07822337).
This message is intended solely for the addressee and may contain
confidential information. If you have received this message in error please
send it back to us and immediately and permanently delete it from your
system. Do not use, copy or disclose the information contained in this
message or in any attachment. Please also note that transmission cannot be
guaranteed to be secure or error-free.


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Swift questions

Hello,
Some questions on new and old features of Swift. Any help would be
great:) Some are very basic, sorry!

  1. Does Swift write two copies and then return back to the client in
    the 3 replica case, with third in the background?

  2. This again is a stupid question, but eventually consistent for an
    object is a bit confusing, unless it is updated. If it is created, it
    is either there or not and you cannot update the data within the
    object. Maybe a POST can change the metadata? Or the container listing
    shows its there but the actual object never got there? Those are the
    only cases I can think of.

  3. Once an object has been written, when and how is the container
    listing, number of bytes, account listing (if new container created)
    etc updated? Is there something done in the path of the PUT to
    indicate this object belongs to a particular container and the number
    of bytes etc is done in the background? A little clarification would
    help:)

  4. For the global clusters, is the object ring across regions or is it
    the same with containers and accounts also?

  5. For containers in global clusters, if a client queries the
    container metadata from another site, is there a chance of it getting
    the old metadata? With respect to the object itself, the eventually
    consistent part is a bit confusing for me:)

MW


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi Marcus,

See answers below. Feel free to ask follow-ups, others may have more to add as well.

Thx
Paul

-----Original Message-----
From: Marcus White [mailto:roastedseaweed.k@gmail.com]
Sent: Wednesday, August 27, 2014 5:04 AM
To: openstack
Subject: [Openstack] Swift questions

Hello,
Some questions on new and old features of Swift. Any help would be
great:) Some are very basic, sorry!

  1. Does Swift write two copies and then return back to the client in the 3 replica case, with third in the background?

PL> Depends on the number of replicas, the formula for what we call a quorum is n/2 + 1 which is the number of success responses we get from the back end storage nodes before telling the client that all is good. So, yes, with 3 replicas you need 2 good responses before returning OK.
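
A tiny sketch of that arithmetic (integer division, so it rounds down before adding one):

    echo $((3 / 2 + 1))   # 3 replicas -> 2 successful responses needed
    echo $((4 / 2 + 1))   # 4 replicas -> 3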

  2. This again is a stupid question, but eventually consistent for an object is a bit confusing, unless it is updated. If it is created, it is either there or not and you cannot update the data within the object. Maybe a POST can change the metadata? Or the container listing shows its there but the actual object never got there? Those are the only cases I can think of.

PL> No, it's a good question because its asked a lot. The most common scenario that we talk about for eventually consistent is the consistency between the existence of an object and its presence in the container listing so your thinking is pretty close. When an object PUT is complete on a storage node (fully committed to disk), that node will then send a message to the appropriate container server to update the listing. It will attempt to do this synchronously but if it can't, the update may be delayed w/o any indication to the client. This is by design and means that it's possible to get a successful PUT, be able to GET the object w/o any problem however it may not yet show up in the container listing. There are other scenarios that demonstrate the eventually consistent nature of Swift, this is just a common and easy to explain one.

  3. Once an object has been written, when and how is the container listing, number of bytes, account listing (if new container created) etc updated? Is there something done in the path of the PUT to indicate this object belongs to a particular container and the number of bytes etc is done in the background? A little clarification would help:)

PL> Covered as part of last question.

  4. For the global clusters, is the object ring across regions or is it the same with containers and accounts also?

PL> Check out the SwiftStack blog if you haven't already at https://swiftstack.com/blog/2013/07/02/swift-1-9-0-release/ and there's also some other stuff (including a demo from the last summit) that you can find googling around a bit too. The 'Region Tier' element described in the blog addresses the makeup of a ring so can be applied to both container and account rings also - I personally didn't work on this feature so will leave it to one of the other guys to comment more in this area.

  5. For containers in global clusters, if a client queries the container metadata from another site, is there a chance of it getting the old metadata? With respect to the object itself, the eventually consistent part is a bit confusing for me:)

PL> There's always a chance of getting old "something", whether it's metadata or data; that's part of being eventually consistent. In the face of an outage (the P in the CAP theorem) Swift will always favor availability, which may mean older data or older metadata (object or container listing) depending on the specific scenario. If deployed correctly, I don't believe use of global clusters increases the odds of this happening (again, I will count on someone else to say more), and it's worth emphasizing that getting "old stuff" happens in the face of some sort of failure (or big network congestion), so you shouldn't think of eventually consistent as a system where you "get whatever you get". You'll get the latest available information.

[Openstack] [tripleo-image-elements][diskimage-builder] Will tripleo-image-elements support Tuskar-UI?

Hello, everyone:
I'm a user of TripleO, and I wonder whether tripleo-image-elements will
support Tuskar-UI.
tripleo-image-elements supports Tuskar now; when will it support
Tuskar-UI? Or should we write a Tuskar-UI element ourselves?

Best Regards!

Chao Yan
--------------
My twitter: Andy Yan @yanchao727 (https://twitter.com/yanchao727)
My Weibo: http://weibo.com/herewearenow
--------------


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Horizon Dashboard: instance's Uptime continues to increment after the instance is shutdown

Hi,

In Horizon Dashboard, Project -> Compute -> Instances, after an instance is shut down, its Uptime continues to increment. Is this the correct behavior?

Thanks,
Danny


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On 08/27/2014 09:06 AM, Danny Choi (dannchoi) wrote:
Hi,

In Horizon Dashboard, Project -> Compute -> Instances, after an instance
is shut down, its Uptime continues to increment. Is this the correct
behavior?

I'm pretty sure this is expected behaviour. I agree the terminology is
confusing though. It's more "time since server creation" rather than
"uptime".

I suspect the origin of the term is that typically with a public cloud
you get billed for the whole time your server is defined. In a typical
"cloud" environment you would rarely shut down a server--it's either
running or you destroy it completely and rebuild a new one if needed in
the future.

Chris

[Openstack] Onboarding interested contributors to Openstack

I have an idea and am hoping to get some thoughts on whether it's a good or
bad one.

Something I've been hearing at the last couple of summits is that folks who
have seen the light and want to get involved with the project don't know
where to go or where to start, since the amount of information covered in the
docs and projects can be pretty overwhelming, especially the first time.

One of the ways I've seen newcomers successfully navigate that complexity is
through a sort of guide/mentor program: someone hears the newcomer out,
determines their interest level and what they want to do, advises them where
the greatest need currently is in that area, makes the necessary
introductions, and basically guides them through the process until they are
settled in a project. Are there plans that accommodate interested people
getting on board with a sort of guided assist or mentor? If not per se, would
there be any interest in it? Maybe not as its own program, but as a Community
sub-program that amounts to a welcoming committee.

Is this something that resonates with anyone or is it already being done
somewhere?

Sort of like a "Start Here" for the new folks.

Adam Lawson
AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

I think OpenStack Upstream Training [1] is pretty much exactly what you’re describing. Make sure to click on the More Details link there.

There will be one at the Paris Summit. [2]

Cheers,
Everett

[1] https://wiki.openstack.org/wiki/OpenStack_Upstream_Training
[2] http://www.openstack.org/blog/2014/08/openstack-upstream-training-in-paris/

On Aug 27, 2014, at 1:11 PM, Adam Lawson alawson@aqorn.com wrote:

I have an idea and am hoping to get some thoughts on whether it's a good or bad one.

Something I've been hearing at the last couple of summits is that folks who have seen the light and want to get involved with the project don't know where to go or where to start, since the amount of information covered in the docs and projects can be pretty overwhelming, especially the first time.

One of the ways I've seen newcomers successfully navigate that complexity is through a sort of guide/mentor program: someone hears the newcomer out, determines their interest level and what they want to do, advises them where the greatest need currently is in that area, makes the necessary introductions, and basically guides them through the process until they are settled in a project. Are there plans that accommodate interested people getting on board with a sort of guided assist or mentor? If not per se, would there be any interest in it? Maybe not as its own program, but as a Community sub-program that amounts to a welcoming committee.

Is this something that resonates with anyone or is it already being done somewhere?

Sort of like a "Start Here" for the new folks.

Adam Lawson
AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


Community mailing list
Community@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/community

[Openstack] Clean ovs ports

Hello,

I'm using the OpenStack Havana release with Neutron. Neutron is with the ML2
plugin and ovs_agents on the hosts. Today I found that if a compute host is
rebooted for some reason and there were instances on the host before the
restart, then after the restart those instances are in the "SHUTOFF" state
and are no longer defined in virsh on the compute host, but nova-compute does
not remove their ports from the openvswitch br-int bridge, and the OVS agent
still sees those ports configured on the host.
Is this a known bug? Does anyone know a solution for it? Thanks in advance
for any info.

--
Pozdrawiam,
Sławek Kapłoński
slawek@kaplonski.pl

--
Klucz GPG można pobrać ze strony:
http://kaplonski.pl/files/slawek_kaplonski.pub.key
--
My public GPG key can be downloaded from:
http://kaplonski.pl/files/slawek_kaplonski.pub.key
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Same question here. I sometimes clean these ports up manually with the ovs-vsctl command beforehand.

Best Regards!

Chao Yan
--------------
My twitter: Andy Yan @yanchao727 https://twitter.com/yanchao727
My Weibo: http://weibo.com/herewearenow
--------------
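
For reference, a minimal cleanup sketch along those lines (assumptions: the
integration bridge is br-int, and a port counts as stale when its backing
interface no longer exists on the host; review the list before deleting
anything):

  for port in $(sudo ovs-vsctl list-ports br-int); do
      if ! ip link show "$port" >/dev/null 2>&1; then
          echo "removing stale port $port"
          sudo ovs-vsctl del-port br-int "$port"
      fi
  done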



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [TripleO] venvs install packages too old?

Hi, Everyone:
When I use TripleO to build an undercloud image, it installs
packages in /opt/stack/venvs/openstack. However, the SQLAlchemy version in
/opt/stack/venvs/openstack/lib is 0.7, while the SQLAlchemy in /usr/lib/ is
0.8. How can I fix this? Why is the SQLAlchemy version in the venvs lower
than the one in /usr/lib? How can I control package versions when building
the undercloud image?
Thank you for any help in advance.

Best Regards!

Chao Yan
--------------
My twitter: Andy Yan @yanchao727 https://twitter.com/yanchao727
My Weibo: http://weibo.com/herewearenow
--------------
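
One quick way to confirm exactly what each environment has, before digging
into the image-build elements (paths taken from the question above):

  /opt/stack/venvs/openstack/bin/pip freeze | grep -i sqlalchemy   # venv copy
  pip freeze | grep -i sqlalchemy                                  # system copy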


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [openstack] Documents of Notifications

Hi folks,

I'm working on a program which receives all error/warn notifications from
openstack and translate them into human readable notifications. So I
desperately need a document describing all notifications. I've found one
document on wiki (https://wiki.openstack.org/wiki/SystemUsageData) but it's
incomplete. And there's a list of nova "event_type" values
(http://paste.openstack.org/show/54140/), but it lacks events from
cinder/neutron/glance.

I wonder if there are documents that can cover most of the notifications.
Can anyone please kindly tell me where to look? Or maybe just tell me
"there's no such thing, just face it!" so that I can save my time and try
some other approaches.

Thanks a lot!!
Kurt Rao


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi Kurt,

https://wiki.openstack.org/wiki/SystemUsageData is the best that I'm aware of.

There's also http://docs.openstack.org/developer/glance/notifications.html for Glance.

I'd love better docs around the types, content, and workflows of the notifications.

-Theresa
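
One low-tech way to see what actually flows on the bus is to peek at the
notification queues on the broker (a sketch: it assumes a RabbitMQ
transport, the rabbitmqadmin tool installed, and that 'notifications.*'
queues exist because something is bound to them; adjust names to your
deployment):

  rabbitmqadmin list queues name
  rabbitmqadmin get queue=notifications.error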



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Launch of an instance failed

Hi, I am deploying DevStack Juno on an Ubuntu 14.04 server virtual machine.
After installation, when I try to launch an instance, it fails.
I am getting a "host not found" error.
Below is the relevant part of /opt/stack/logs/screen/screen-n-cond.log,
showing the error:
2014-08-28 23:44:59.448 ERROR nova.scheduler.utils
[req-6f220296-8ec2-4e49-821d-0d69d3acc315 admin admin] [instance:
7f105394-414c-4458-b1a1-6f37d6cff87a] Error from last host:
juno-devstack-server (node juno-devstack-server): [u'Traceback (most recent
call last):\n', u'  File "/opt/stack/nova/nova/compute/manager.py", line
1932, in _do_build_and_run_instance\n    filter_properties)\n', u'  File
"/opt/stack/nova/nova/compute/manager.py", line 2067, in
_build_and_run_instance\n    instance_uuid=instance.uuid,
reason=six.text_type(e))\n', u'RescheduledException: Build of instance
7f105394-414c-4458-b1a1-6f37d6cff87a was re-scheduled: not all arguments
converted during string formatting\n']

Regards
Nikesh
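
The RescheduledException only says the build was re-scheduled; the
underlying failure is usually logged a few lines earlier on the compute
side. A quick way to find it (log path assumed from a default DevStack
layout):

  grep -B 5 'not all arguments converted' /opt/stack/logs/screen/screen-n-cpu.log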


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Compute nodes do not support the QEMU hypervisor as of Juno, so you should not deploy a compute node inside a VM.

Guo Jinwei



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Networking issues in DevStack

Hi,

I’ve been trying to get DevStack working to the point that I could use it
to explore CloudFoundry (
http://docs.cloudfoundry.org/deploying/openstack/validate_openstack.html).
I keep getting stuck with networking glitches. Sometimes it works, but most
of the time it doesn’t. Hopefully you can help. This is how I’m running
DevStack:
http://software.danielwatrous.com/openstack-development-using-devstack/

When I boot my server fresh, the networking interfaces are as shown in the
first block below. The second block below that shows what they look like
after running stack.sh and starting the first guest VM. The IP address
disappears for eth0, but I don’t lose any SSH connections that are
currently live against that interface. When I log in to the instance I
can't access the outside internet. I have tried iptables changes (sudo
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE). I have tried
updating /proc/sys/net/ipv4/ip_forward. I can ping the host, the gateway
and the main IP address, including the one that supposedly no longer exists.

Any help appreciated.


**** Fresh boot Ubuntu 14.04 LTS x64 ****

**** Two network ports, DHCP-obtained IP addresses ****

**** Behind a proxy ****


watrous@watrous-helion:~$ ifconfig

eth0 Link encap:Ethernet HWaddr 00:1e:c9:b2:40:b3

      inet addr:16.85.145.3  Bcast:16.85.147.255  Mask:255.255.252.0

      inet6 addr: fe80::21e:c9ff:feb2:40b3/64 Scope:Link

      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

      RX packets:277 errors:0 dropped:0 overruns:0 frame:0

      TX packets:149 errors:0 dropped:0 overruns:0 carrier:0

      collisions:0 txqueuelen:1000

      RX bytes:29446 (29.4 KB)  TX bytes:24038 (24.0 KB)

eth1 Link encap:Ethernet HWaddr 00:1e:c9:b2:40:b5

      inet addr:16.85.145.6  Bcast:16.85.147.255  Mask:255.255.252.0

      inet6 addr: fe80::21e:c9ff:feb2:40b5/64 Scope:Link

      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

      RX packets:177 errors:0 dropped:0 overruns:0 frame:0

      TX packets:10 errors:0 dropped:0 overruns:0 carrier:0

      collisions:0 txqueuelen:1000

      RX bytes:15657 (15.6 KB)  TX bytes:1372 (1.3 KB)

lo Link encap:Local Loopback

      inet addr:127.0.0.1  Mask:255.0.0.0

      inet6 addr: ::1/128 Scope:Host

      UP LOOPBACK RUNNING  MTU:65536  Metric:1

      RX packets:128 errors:0 dropped:0 overruns:0 frame:0

      TX packets:128 errors:0 dropped:0 overruns:0 carrier:0

      collisions:0 txqueuelen:0

      RX bytes:7930 (7.9 KB)  TX bytes:7930 (7.9 KB)

virbr0 Link encap:Ethernet HWaddr 66:0f:58:21:8d:7f

      inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0

      UP BROADCAST MULTICAST  MTU:1500  Metric:1

      RX packets:0 errors:0 dropped:0 overruns:0 frame:0

      TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

      collisions:0 txqueuelen:0

      RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

watrous@watrous-helion:~$ ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
default

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet 127.0.0.1/8 scope host lo

   valid_lft forever preferred_lft forever

inet6 ::1/128 scope host

   valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group
default qlen 1000

link/ether 00:1e:c9:b2:40:b3 brd ff:ff:ff:ff:ff:ff

inet 16.85.145.3/22 brd 16.85.147.255 scope global eth0

   valid_lft forever preferred_lft forever

inet6 fe80::21e:c9ff:feb2:40b3/64 scope link

   valid_lft forever preferred_lft forever

3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group
default qlen 1000

link/ether 00:1e:c9:b2:40:b5 brd ff:ff:ff:ff:ff:ff

inet 16.85.145.6/22 brd 16.85.147.255 scope global eth1

   valid_lft forever preferred_lft forever

inet6 fe80::21e:c9ff:feb2:40b5/64 scope link

   valid_lft forever preferred_lft forever

4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN group default

link/ether 66:0f:58:21:8d:7f brd ff:ff:ff:ff:ff:ff

inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

   valid_lft forever preferred_lft forever

watrous@watrous-helion:~$ ip route

default via 16.85.144.1 dev eth0

16.85.144.0/22 dev eth0 proto kernel scope link src 16.85.145.3

16.85.144.0/22 dev eth1 proto kernel scope link src 16.85.145.6

192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1

watrous@watrous-helion:~$ sudo iptables -L

Chain INPUT (policy ACCEPT)

target prot opt source destination

Chain FORWARD (policy ACCEPT)

target prot opt source destination

Chain OUTPUT (policy ACCEPT)

target prot opt source destination


****** After stack.sh and creating a new VM ******


watrous@watrous-helion:~$ ifconfig

br100 Link encap:Ethernet HWaddr 00:1e:c9:b2:40:b3

      inet addr:10.11.12.1  Bcast:10.11.12.255  Mask:255.255.255.0

      inet6 addr: fe80::60ab:b0ff:fe26:9fd/64 Scope:Link

      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

      RX packets:887 errors:0 dropped:23 overruns:0 frame:0

      TX packets:23 errors:0 dropped:0 overruns:0 carrier:0

      collisions:0 txqueuelen:0

      RX bytes:292811 (292.8 KB)  TX bytes:2367 (2.3 KB)

eth0 Link encap:Ethernet HWaddr 00:1e:c9:b2:40:b3

      inet6 addr: fe80::21e:c9ff:feb2:40b3/64 Scope:Link

      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

      RX packets:669986 errors:0 dropped:4 overruns:0 frame:0

      TX packets:588283 errors:0 dropped:0 overruns:0 carrier:0

      collisions:0 txqueuelen:1000

      RX bytes:597137467 (597.1 MB)  TX bytes:593935543 (593.9 MB)

eth1 Link encap:Ethernet HWaddr 00:1e:c9:b2:40:b5

      inet addr:16.85.145.6  Bcast:16.85.147.255  Mask:255.255.252.0

      inet6 addr: fe80::21e:c9ff:feb2:40b5/64 Scope:Link

      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

      RX packets:354520 errors:0 dropped:0 overruns:0 frame:0

      TX packets:285578 errors:0 dropped:0 overruns:0 carrier:0

      collisions:0 txqueuelen:1000

      RX bytes:313844137 (313.8 MB)  TX bytes:307237034 (307.2 MB)

lo Link encap:Local Loopback

      inet addr:127.0.0.1  Mask:255.0.0.0

      inet6 addr: ::1/128 Scope:Host

      UP LOOPBACK RUNNING  MTU:65536  Metric:1

      RX packets:60309 errors:0 dropped:0 overruns:0 frame:0

      TX packets:60309 errors:0 dropped:0 overruns:0 carrier:0

      collisions:0 txqueuelen:0

      RX bytes:20401825 (20.4 MB)  TX bytes:20401825 (20.4 MB)

virbr0 Link encap:Ethernet HWaddr 66:0f:58:21:8d:7f

      inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0

      UP BROADCAST MULTICAST  MTU:1500  Metric:1

      RX packets:0 errors:0 dropped:0 overruns:0 frame:0

      TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

      collisions:0 txqueuelen:0

      RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vnet0 Link encap:Ethernet HWaddr fe:16:3e:f6:f9:b8

      inet6 addr: fe80::fc16:3eff:fef6:f9b8/64 Scope:Link

      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

      RX packets:31 errors:0 dropped:0 overruns:0 frame:0

      TX packets:188 errors:0 dropped:0 overruns:0 carrier:0

      collisions:0 txqueuelen:500

      RX bytes:3004 (3.0 KB)  TX bytes:18215 (18.2 KB)

watrous@watrous-helion:~$ ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
default

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet 127.0.0.1/8 scope host lo

   valid_lft forever preferred_lft forever

inet 169.254.169.254/32 scope link lo

   valid_lft forever preferred_lft forever

inet6 ::1/128 scope host

   valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br100
state UP group default qlen 1000

link/ether 00:1e:c9:b2:40:b3 brd ff:ff:ff:ff:ff:ff

inet6 fe80::21e:c9ff:feb2:40b3/64 scope link

   valid_lft forever preferred_lft forever

3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group
default qlen 1000

link/ether 00:1e:c9:b2:40:b5 brd ff:ff:ff:ff:ff:ff

inet 16.85.145.6/22 brd 16.85.147.255 scope global eth1

   valid_lft forever preferred_lft forever

inet6 fe80::21e:c9ff:feb2:40b5/64 scope link

   valid_lft forever preferred_lft forever

4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN group default

link/ether 66:0f:58:21:8d:7f brd ff:ff:ff:ff:ff:ff

inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

   valid_lft forever preferred_lft forever

5: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
group default

link/ether 00:1e:c9:b2:40:b3 brd ff:ff:ff:ff:ff:ff

inet 10.11.12.1/24 brd 10.11.12.255 scope global br100

   valid_lft forever preferred_lft forever

inet 16.85.145.3/22 brd 16.85.147.255 scope global br100

   valid_lft forever preferred_lft forever

inet6 fe80::60ab:b0ff:fe26:9fd/64 scope link

   valid_lft forever preferred_lft forever

6: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
master br100 state UNKNOWN group default qlen 500

link/ether fe:16:3e:f6:f9:b8 brd ff:ff:ff:ff:ff:ff

inet6 fe80::fc16:3eff:fef6:f9b8/64 scope link

   valid_lft forever preferred_lft forever

watrous@watrous-helion:~$ ip route

default via 16.85.144.1 dev eth1

10.11.12.0/24 dev br100 proto kernel scope link src 10.11.12.1

16.85.144.0/22 dev eth1 proto kernel scope link src 16.85.145.6

16.85.144.0/22 dev br100 proto kernel scope link src 16.85.145.3

192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1

watrous@watrous-helion:~$ sudo iptables -L

Chain INPUT (policy ACCEPT)

target prot opt source destination

nova-compute-INPUT all -- anywhere anywhere

nova-network-INPUT all -- anywhere anywhere

nova-api-INPUT all -- anywhere anywhere

Chain FORWARD (policy ACCEPT)

target prot opt source destination

nova-filter-top all -- anywhere anywhere

nova-compute-FORWARD all -- anywhere anywhere

nova-network-FORWARD all -- anywhere anywhere

nova-api-FORWARD all -- anywhere anywhere

Chain OUTPUT (policy ACCEPT)

target prot opt source destination

nova-filter-top all -- anywhere anywhere

nova-compute-OUTPUT all -- anywhere anywhere

nova-network-OUTPUT all -- anywhere anywhere

nova-api-OUTPUT all -- anywhere anywhere

Chain nova-api-FORWARD (1 references)

target prot opt source destination

Chain nova-api-INPUT (1 references)

target prot opt source destination

ACCEPT tcp -- anywhere watrous-helion.americas.hpqcorp.net
tcp dpt:8775

Chain nova-api-OUTPUT (1 references)

target prot opt source destination

Chain nova-api-local (1 references)

target prot opt source destination

Chain nova-compute-FORWARD (1 references)

target prot opt source destination

ACCEPT all -- anywhere anywhere

ACCEPT all -- anywhere anywhere

ACCEPT udp -- 0.0.0.0 255.255.255.255 udp
spt:bootpc dpt:bootps

Chain nova-compute-INPUT (1 references)

target prot opt source destination

ACCEPT udp -- 0.0.0.0 255.255.255.255 udp
spt:bootpc dpt:bootps

Chain nova-compute-OUTPUT (1 references)

target prot opt source destination

Chain nova-compute-inst-2 (1 references)

target prot opt source destination

DROP all -- anywhere anywhere state INVALID

ACCEPT all -- anywhere anywhere state
RELATED,ESTABLISHED

nova-compute-provider all -- anywhere anywhere

ACCEPT udp -- 10.11.12.1 anywhere udp
spt:bootps dpt:bootpc

ACCEPT all -- 10.11.12.0/24 anywhere

nova-compute-sg-fallback all -- anywhere anywhere

Chain nova-compute-local (1 references)

target prot opt source destination

nova-compute-inst-2 all -- anywhere 10.11.12.3

Chain nova-compute-provider (1 references)

target prot opt source destination

Chain nova-compute-sg-fallback (1 references)

target prot opt source destination

DROP all -- anywhere anywhere

Chain nova-filter-top (2 references)

target prot opt source destination

nova-compute-local all -- anywhere anywhere

nova-network-local all -- anywhere anywhere

nova-api-local all -- anywhere anywhere

Chain nova-network-FORWARD (1 references)

target prot opt source destination

ACCEPT all -- anywhere anywhere

ACCEPT all -- anywhere anywhere

Chain nova-network-INPUT (1 references)

target prot opt source destination

ACCEPT udp -- anywhere anywhere udp dpt:bootps

ACCEPT tcp -- anywhere anywhere tcp dpt:bootps

ACCEPT udp -- anywhere anywhere udp dpt:domain

ACCEPT tcp -- anywhere anywhere tcp dpt:domain

Chain nova-network-OUTPUT (1 references)

target prot opt source destination

Chain nova-network-local (1 references)

target prot opt source destination


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

I don't see the eth0 IP address disappearing; it looks like it moved to
br100. What's the IP address of your first VM? You are using nova-network,
not Neutron, right? And why are you using different routes to the same
network?

watrous@watrous-helion:~$ ip route

default via 16.85.144.1 dev eth1

10.11.12.0/24 dev br100 proto kernel scope link src 10.11.12.1

16.85.144.0/22 dev eth1 proto kernel scope link src 16.85.145.6

16.85.144.0/22 dev br100 proto kernel scope link src 16.85.145.3

192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1

Best Regards!

Chao Yan
--------------
My twitter: Andy Yan @yanchao727 https://twitter.com/yanchao727
My Weibo: http://weibo.com/herewearenow
--------------
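
If the guests need outbound access once br100 has taken over eth0's
address, a common workaround looks like this (a sketch based on the output
above, where 10.11.12.0/24 is the fixed range and eth1 carries the default
route after stack.sh; not a verified fix for this setup):

  sudo sysctl -w net.ipv4.ip_forward=1
  sudo iptables -t nat -A POSTROUTING -s 10.11.12.0/24 -o eth1 -j MASQUERADE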



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Fwaas test

Hi
When I create a new firewall and add rules and policies, the firewall stays in the PENDING_CREATE state.
http://docs.openstack.org/admin-guide-cloud/content/install_neutron-fwaas-agent.html

Once I create a router and attach an interface to it, the firewall moves to the ACTIVE state. But when I delete this interface and the router, the firewall state still remains ACTIVE.
Should it not switch back to PENDING_CREATE?

Ajay


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

There is already a defect filed for this:
https://bugs.launchpad.net/neutron/+bug/1330913

Regards,
Koteswar
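
To watch the state transition while reproducing this, the era's neutron CLI
can poll the firewall status (<firewall-id> is a placeholder; substitute
your firewall's ID):

  neutron firewall-list
  neutron firewall-show <firewall-id>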



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Does "nova resize" work for decreasing the server size?

Hi,

“nova resize” works for increasing the server size by changing its flavor.

However, it does not seem to work for decreasing the size.
Should it work? What’s confusing is that it reports “Server resizing is 100% complete/Finished”.

root@all-in-one:~# nova show cirros-1

+--------------------------------------+----------------------------------------------------------+

| Property | Value |

+--------------------------------------+----------------------------------------------------------+

| OS-DCF:diskConfig | MANUAL |

| OS-EXT-AZ:availability_zone | nova |

| OS-EXT-SRV-ATTR:host | all-in-one |

| OS-EXT-SRV-ATTR:hypervisor_hostname | all-in-one.cisco.com |

| OS-EXT-SRV-ATTR:instance_name | instance-00000003 |

| OS-EXT-STS:power_state | 1 |

| OS-EXT-STS:task_state | - |

| OS-EXT-STS:vm_state | active |

| OS-SRV-USG:launched_at | 2014-08-29T18:49:27.000000 |

| OS-SRV-USG:terminated_at | - |

| Private_Net10 network | 10.10.10.5 |

| accessIPv4 | |

| accessIPv6 | |

| config_drive | |

| created | 2014-08-29T18:43:18Z |

| flavor | m1.large (4) |

| hostId | 663a6984eb95f9f0250d4f3775c837bd325feaa2f9e77dbeb7a9d31c |

| id | bc0f6c40-e47b-4b8c-a73c-99ded1ce1b89 |

| image | cirros-0.3.2 (07a033dc-3dde-4d57-897c-1e0974bb1ddf) |

| key_name | - |

| metadata | {} |

| name | cirros-1 |

| os-extended-volumes:volumes_attached | [] |

| progress | 0 |

| security_groups | default |

| status | ACTIVE |

| tenant_id | 70fe7c978d4240619783133dda920574 |

| updated | 2014-08-29T18:49:55Z |

| user_id | 923d9ed9427549edaf7a3cbce18bb3ec |

+--------------------------------------+----------------------------------------------------------+

root@all-in-one:~# nova resize cirros-1 3 --poll

Server resizing... 100% complete

Finished

root@all-in-one:~# nova list

+--------------------------------------+----------+--------+------------+-------------+--------------------------+

| ID | Name | Status | Task State | Power State | Networks |

+--------------------------------------+----------+--------+------------+-------------+--------------------------+

| bc0f6c40-e47b-4b8c-a73c-99ded1ce1b89 | cirros-1 | ACTIVE | - | Running | Private_Net10=10.10.10.5 |

+--------------------------------------+----------+--------+------------+-------------+--------------------------+

root@all-in-one:~# nova resize-confirm cirros-1

ERROR: Cannot 'confirmResize' while instance is in vm_state active (HTTP 409) (Request-ID: req-9aed5cb6-c6ae-4369-aa14-7325fd5178ff)

root@all-in-one:~# nova show cirros-1

+--------------------------------------+----------------------------------------------------------+

| Property | Value |

+--------------------------------------+----------------------------------------------------------+

| OS-DCF:diskConfig | MANUAL |

| OS-EXT-AZ:availability_zone | nova |

| OS-EXT-SRV-ATTR:host | all-in-one |

| OS-EXT-SRV-ATTR:hypervisor_hostname | all-in-one.cisco.com |

| OS-EXT-SRV-ATTR:instance_name | instance-00000003 |

| OS-EXT-STS:power_state | 1 |

| OS-EXT-STS:task_state | - |

| OS-EXT-STS:vm_state | active |

| OS-SRV-USG:launched_at | 2014-08-29T18:49:27.000000 |

| OS-SRV-USG:terminated_at | - |

| Private_Net10 network | 10.10.10.5 |

| accessIPv4 | |

| accessIPv6 | |

| config_drive | |

| created | 2014-08-29T18:43:18Z |

| flavor | m1.large (4) |

| hostId | 663a6984eb95f9f0250d4f3775c837bd325feaa2f9e77dbeb7a9d31c |

| id | bc0f6c40-e47b-4b8c-a73c-99ded1ce1b89 |

| image | cirros-0.3.2 (07a033dc-3dde-4d57-897c-1e0974bb1ddf) |

| key_name | - |

| metadata | {} |

| name | cirros-1 |

| os-extended-volumes:volumes_attached | [] |

| progress | 0 |

| security_groups | default |

| status | ACTIVE |

| tenant_id | 70fe7c978d4240619783133dda920574 |

| updated | 2014-08-29T18:50:18Z |

| user_id | 923d9ed9427549edaf7a3cbce18bb3ec |

+--------------------------------------+----------------------------------------------------------+

root@all-in-one:~#

Thanks,
Danny
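
For comparison, this is the flow a resize normally follows (a sketch;
flavor 3 is m1.medium in the default flavor set):

  nova resize cirros-1 3 --poll
  nova list                     # status should read VERIFY_RESIZE here
  nova resize-confirm cirros-1

The output above shows the instance back in ACTIVE with its old m1.large
flavor and resize-confirm refusing to run, which suggests the resize was
rolled back rather than left awaiting confirmation; the compute log should
say why.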


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Cisco IAC 4 compatibility?

Which version of OpenStack is compatible with Cisco's IAC v4? Is it true
that v4 was released in January 2014 but only supports the Essex release?

Mahalo,
Adam


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

I don't mean to bump this unnecessarily, but I'm wondering whether anyone on
the list sits at Cisco and is familiar with IAC compatibility with OpenStack
versions. I'll continue checking with my offline contacts in the meantime.

Mahalo,
Adam

Adam Lawson

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [nova] Issues while spawning vm

Hi,

I am trying to spawn a VM using the cirros image and inject the network information into the VM while spawning. However, I am getting the following warnings/errors:

Exception AttributeError: "GuestFS instance has no attribute '_o'" in <...> ignored

Ignoring error injecting net into image (aug_get: no matching node)

How do I resolve this issue and be able to inject the network information into the VM?

Regards,
Sanjivini Naikar
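
When libguestfs starts failing like this, it is often quickest to test
libguestfs itself outside of nova first (requires the libguestfs tools to
be installed on the compute host):

  libguestfs-test-tool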


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Heat patch for Icehouse vs Volume deletion.

Hi guys,

I've written a simple patch that takes care of the volume deletion, and it's ugly. I'm posting it here so anybody with better knowledge of the Heat code can base their patch on it, if it really is that ugly :)

For some reason, an instance will sometimes fail to delete; this patch takes care of that and retries until the instance disappears:
--- instance.py.orig	2014-08-30 22:07:45.259328109 +0000
+++ instance.py	2014-08-30 22:23:03.180527084 +0000
@@ -587,8 +587,13 @@
                     self.resource_id_set(None)
                     break
                 elif server.status == "ERROR":
-                    raise exception.Error(_("Deletion of server %s failed.") %
-                                          server.id)
+                    while server.status == "ERROR":
+                        nova_utils.refresh_server(server)
+                        server.reset_state()
+                        server.delete()
+                        if server.status == "DELETED":
+                            self.resource_id_set(None)
+                            break
             except clients.novaclient.exceptions.NotFound:
                 self.resource_id_set(None)
                 break

This patch is based on the following bug: https://bugs.launchpad.net/heat/+bug/1298350 => https://review.openstack.org/#/c/86638/
That patch didn't work for me, but this one works flawlessly.

--- volume.py.icehouse	2014-08-30 00:59:49.844290344 +0000
+++ volume.py	2014-08-30 21:58:08.769813146 +0000
@@ -151,11 +151,11 @@
             if vol.status == 'in-use':
                 logger.warn(_('can not delete volume when in-use'))
                 raise exception.Error(_('Volume in use'))
-
-            vol.delete()
-            while True:
-                yield
-                vol.get()
+            if vol.status != 'deleting':
+                vol.delete()
+            while True:
+                yield
+                vol.get()
         except clients.cinderclient.exceptions.NotFound:
             self.resource_id_set(None)
@@ -227,69 +227,187 @@
         logger.info(_('%s - complete') % str(self))

-
-class VolumeDetachTask(object):
+class VolumeDeleteTask(object):
     """A task for detaching a volume from a Nova server."""

-    def __init__(self, stack, server_id, attachment_id):
+    def __init__(self, stack, volume_id):
         """
         Initialise with the stack (for obtaining the clients), and the IDs of
         the server and volume.
         """
         self.clients = stack.clients
-        self.server_id = server_id
-        self.attachment_id = attachment_id
+        self.volume_id = volume_id

     def __str__(self):
         """Return a human-readable string description of the task."""
-        return _('Removing attachment %(att)s from Instance %(srv)s') % {
-            'att': self.attachment_id, 'srv': self.server_id}
+        return _('Deleting volume %(vol)s') % {
+            'vol': self.volume_id}

     def __repr__(self):
         """Return a brief string description of the task."""
-        return '%s(%s -/> %s)' % (type(self).__name__,
-                                  self.attachment_id,
-                                  self.server_id)
+        return '%s(%s)' % (type(self).__name__,
+                           self.volume_id)

     def __call__(self):
         """Return a co-routine which runs the task."""
         logger.debug(str(self))
-        server_api = self.clients.nova().volumes
-
-        # get reference to the volume while it is attached
         try:
-            nova_vol = server_api.get_server_volume(self.server_id,
-                                                    self.attachment_id)
-            vol = self.clients.cinder().volumes.get(nova_vol.id)
-        except (clients.cinderclient.exceptions.NotFound,
-                clients.novaclient.exceptions.BadRequest,
-                clients.novaclient.exceptions.NotFound):
+            vol = self.clients.cinder().volumes.get(self.volume_id)
+            if vol.status in ('available'):
+                vol.delete()
+                yield
+            while vol.status in ('available', 'deleting'):
+                logger.debug(_('%s - volume is still %s') % (str(self), vol.status))
+                yield
+                vol.get()
+            logger.info(_('%(name)s - status: %(status)s') % {
+                'name': str(self), 'status': vol.status})
+
+        except clients.cinderclient.exceptions.NotFound:
             logger.warning(_('%s - volume not found') % str(self))
             return
+        except clients.cinderclient.exceptions.BadRequest:
+            msg = _('Failed to delete %(vol)s') % {
+                'vol': self.volume_id}
+            raise exception.Error(msg)

-        # detach the volume using volume_attachment
-        try:
-            server_api.delete_server_volume(self.server_id, self.attachment_id)
-        except (clients.novaclient.exceptions.BadRequest,
-                clients.novaclient.exceptions.NotFound) as e:
-            logger.warning('%(res)s - %(err)s' % {'res': str(self),
-                                                  'err': str(e)})
-        logger.info(_('%s - complete') % str(self))
+        yield

+class VolumeAttachment(resource.Resource):
+    PROPERTIES = (
+        INSTANCE_ID, VOLUME_ID, DEVICE,
+    ) = (
+        'InstanceId', 'VolumeId', 'Device',
+    )
+
+    properties_schema = {
+        INSTANCE_ID: properties.Schema(
+            properties.Schema.STRING,
+            _('The ID of the instance to which the volume attaches.'),
+            required=True,
+            update_allowed=True
+        ),
+        VOLUME_ID: properties.Schema(
+            properties.Schema.STRING,
+            _('The ID of the volume to be attached.'),
+            required=True,
+            update_allowed=True
+        ),
+        DEVICE: properties.Schema(
+            properties.Schema.STRING,
+            _('The device where the volume is exposed on the instance. This '
+              'assignment may not be honored and it is advised that the path '
+              '/dev/disk/by-id/virtio- be used instead.'),
+            required=True,
+            update_allowed=True,
+            constraints=[
+                constraints.AllowedPattern('/dev/vd[b-z]'),
+            ]
+        ),
+    }
+
+    update_allowed_keys = ('Properties',)
+
+    def handle_create(self):
+        server_id = self.properties[self.INSTANCE_ID]
+        volume_id = self.properties[self.VOLUME_ID]
+        dev = self.properties[self.DEVICE]
+
+        attach_task = VolumeAttachTask(self.stack, server_id, volume_id, dev)
+        attach_runner = scheduler.TaskRunner(attach_task)
+
+        attach_runner.start()
+
+        self.resource_id_set(attach_task.attachment_id)
+
+        return attach_runner
+
+    def check_create_complete(self, attach_runner):
+        return attach_runner.step()
+
+    def handle_delete(self):
+        server_id = self.properties[self.INSTANCE_ID]
+        volume_id = self.properties[self.VOLUME_ID]
+        detach_task = VolumeDetachTask(self.stack, server_id, volume_id)
+        scheduler.TaskRunner(detach_task)()
+        delete_task = VolumeDeleteTask(self.stack, volume_id)
+        scheduler.TaskRunner(delete_task)()
+
+    def handle_update(self, json_snippet, tmpl_diff, prop_diff):
+        checkers = []
+        if prop_diff:
+            volume_id = self.properties.get(self.VOLUME_ID)
+            server_id = self.properties.get(self.INSTANCE_ID)
+            detach_task = VolumeDetachTask(self.stack, server_id, volume_id)
+            checkers.append(scheduler.TaskRunner(detach_task))
+
+            if self.VOLUME_ID in prop_diff:
+                volume_id = prop_diff.get(self.VOLUME_ID)
+            device = self.properties.get(self.DEVICE)
+            if self.DEVICE in prop_diff:
+                device = prop_diff.get(self.DEVICE)
+            if self.INSTANCE_ID in prop_diff:
+                server_id = prop_diff.get(self.INSTANCE_ID)
+            attach_task = VolumeAttachTask(self.stack, server_id,
+                                           volume_id, device)
+
+            checkers.append(scheduler.TaskRunner(attach_task))
+
+        if checkers:
+            checkers[0].start()
+        return checkers
+
+    def check_update_complete(self, checkers):
+        for checker in checkers:
+            if not checker.started():
+                checker.start()
+            if not checker.step():
+                return False
+        self.resource_id_set(checkers[-1]._task.attachment_id)
+        return True
+
+
+class VolumeDetachTask(object):
+    """A task for detaching a volume from a Nova server."""
+
+    def __init__(self, stack, server_id, volume_id):
+        """
+        Initialise with the stack (for obtaining the clients), and the IDs of
+        the server and volume.
+        """
+        self.clients = stack.clients
+        self.server_id = server_id
+        self.volume_id = volume_id
+
+    def __str__(self):
+        """Return a human-readable string description of the task."""
+        return _('Removing volume %(vol)s from Instance %(srv)s') % {
+            'vol': self.volume_id, 'srv': self.server_id}
+
+    def __repr__(self):
+        """Return a brief string description of the task."""
+        return '%s(%s -/> %s)' % (type(self).__name__,
+                                  self.volume_id,
+                                  self.server_id)
+
+    def __call__(self):
+        """Return a co-routine which runs the task."""
+        logger.debug(str(self))
         try:
-            vol.get()
+            vol = self.clients.cinder().volumes.get(self.volume_id)
+            attached_to = [att['server_id'] for att in vol.attachments]
+            if self.server_id not in attached_to:
+                msg = _('Volume %(vol)s is not attached to server %(srv)s') % {
+                    'vol': self.volume_id, 'srv': self.server_id}
+                raise exception.Error(msg)
+
+            vol.detach()
+            yield
             while vol.status in ('in-use', 'detaching'):
                 logger.debug(_('%s - volume still in use') % str(self))
                 yield
-
-                try:
-                    server_api.delete_server_volume(self.server_id,
-                                                    self.attachment_id)
-                except (clients.novaclient.exceptions.BadRequest,
-                        clients.novaclient.exceptions.NotFound):
-                    pass
                 vol.get()

             logger.info(_('%(name)s - status: %(status)s') % {
@@ -299,27 +417,32 @@
         except clients.cinderclient.exceptions.NotFound:
             logger.warning(_('%s - volume not found') % str(self))
-            return
+        except clients.cinderclient.exceptions.BadRequest:
+            msg = _('Failed to detach %(vol)s from server %(srv)s') % {
+                'vol': self.volume_id, 'srv': self.server_id}
+            raise exception.Error(msg)

         # The next check is needed for immediate reattachment when updating:
-        # as the volume info is taken from cinder, but the detach
-        # request is sent to nova, there might be some time
-        # between cinder marking volume as 'available' and
-        # nova removing attachment from it's own objects, so we
+        # there might be some time between cinder marking volume as 'available'
+        # and nova removing attachment from it's own objects, so we
         # check that nova already knows that the volume is detached
-        def server_has_attachment(server_id, attachment_id):
-            try:
-                server_api.get_server_volume(server_id, attachment_id)
-            except clients.novaclient.exceptions.NotFound:
+        server_api = self.clients.nova().volumes
+
+        def server_has_attachment(server_id, volume_id):
+            vol = self.clients.cinder().volumes.get(self.volume_id)
+            attached_to = [att['server_id'] for att in vol.attachments]
+            if self.server_id not in attached_to:
                 return False
-            return True
+            else:
+                return True

-        while server_has_attachment(self.server_id, self.attachment_id):
-            logger.info(_("Server %(srv)s still has attachment %(att)s.") %
-                        {'att': self.attachment_id, 'srv': self.server_id})
+        while server_has_attachment(self.server_id, self.volume_id):
+            logger.info(_("Server %(srv)s still has %(vol)s attached.") %
+                        {'vol': self.volume_id, 'srv': self.server_id})
             yield

-        logger.info(_("Volume %(vol)s is detached from server %(srv)s") %
-                    {'vol': vol.id, 'srv': self.server_id})
+        logger.info(_('%s - complete') % str(self))

 class VolumeAttachment(resource.Resource):
@@ -376,29 +499,25 @@
     def handle_delete(self):
         server_id = self.properties[self.INSTANCE_ID]
-        detach_task = VolumeDetachTask(self.stack, server_id, self.resource_id)
+        volume_id = self.properties[self.VOLUME_ID]
+        detach_task = VolumeDetachTask(self.stack, server_id, volume_id)
         scheduler.TaskRunner(detach_task)()
+        delete_task = VolumeDeleteTask(self.stack, volume_id)
+        scheduler.TaskRunner(delete_task)()

     def handle_update(self, json_snippet, tmpl_diff, prop_diff):
         checkers = []
         if prop_diff:
-            # Even though some combinations of changed properties
-            # could be updated in UpdateReplace manner,
-            # we still first detach the old resource so that
-            # self.resource_id is not replaced prematurely
             volume_id = self.properties.get(self.VOLUME_ID)
+            server_id = self.properties.get(self.INSTANCE_ID)
+            detach_task = VolumeDetachTask(self.stack, server_id, volume_id)
+            checkers.append(scheduler.TaskRunner(detach_task))
+
             if self.VOLUME_ID in prop_diff:
                 volume_id = prop_diff.get(self.VOLUME_ID)
-
             device = self.properties.get(self.DEVICE)
             if self.DEVICE in prop_diff:
                 device = prop_diff.get(self.DEVICE)
-
-            server_id = self.properties.get(self.INSTANCE_ID)
-            detach_task = VolumeDetachTask(self.stack, server_id,
-                                           self.resource_id)
-            checkers.append(scheduler.TaskRunner(detach_task))
-
             if self.INSTANCE_ID in prop_diff:
                 server_id = prop_diff.get(self.INSTANCE_ID)
             attach_task = VolumeAttachTask(self.stack, server_id,
@@ -419,7 +538,6 @@
         self.resource_id_set(checkers[-1]._task.attachment_id)
         return True
-

 class CinderVolume(Volume):
     PROPERTIES = (

Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] How to Fix "Invalid OpenStack Identity credentials"?

Hi there,
When I executed the following command, the above-mentioned message was
shown. How can I solve the problem?
keystone tenant-create --name=admin --description="Admin Tenant"
Thanks in advance.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Yes, I defined my token and service endpoint as follows:
export OS_SERVICE_TOKEN=572aa9b4424d4c6dfbe5a794c253a1b4
export OS_SERVICE_ENDPOINT=http://10.0.0.1:35357/v2.0

But it doesn't work at all.
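
A couple of things worth checking at this point (a sketch, not a verified
fix; the token value is taken from the thread):

  # the service token must match admin_token in /etc/keystone/keystone.conf
  grep admin_token /etc/keystone/keystone.conf

  # regular credentials can shadow the token bootstrap, so unset them first
  unset OS_USERNAME OS_PASSWORD OS_TENANT_NAME OS_AUTH_URL
  export OS_SERVICE_TOKEN=572aa9b4424d4c6dfbe5a794c253a1b4
  export OS_SERVICE_ENDPOINT=http://10.0.0.1:35357/v2.0
  keystone tenant-create --name=admin --description="Admin Tenant"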

On Sun, Aug 31, 2014 at 9:45 PM, Anne Gentle annegentle@justwriteclick.com
wrote:

Hi there - make sure you've got the service token in your environment as
described here:

http://docs.openstack.org/icehouse/install-guide/install/apt/content/keystone-users.html

It provides a way to bootstrap that first set of user creation steps. Feel
free to ask for more explanation if the page doesn't explain enough.

Anne

Anne Gentle
Content Stacker
anne@openstack.org



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] OpenStack Icehouse Installation on Redhat7

Hello.

I'm now manually installing the latest OpenStack Icehouse on Redhat7, but
I'm suffering from a problem: an instance cannot get an IP address from qdhcp
on the Network Node.

I don't know exactly what the problem is, but I found that the Redhat7
environment seems a little different from the other OS environments. That
is, there is no neutron-openvswitch-agent file in the /etc/init.d directory. As
the OpenStack installation guide introduced, the command "cp
/etc/init.d/neutron-openvswitch-agent
/etc/init.d/neutron-openvswitch-agent.orig" needs to be done, but I
couldn't.

Has anyone successfully installed OpenStack Icehouse with a 3-node
deployment on Redhat7?

Any help would really be appreciated.

Best regards

Byeong-Gi KIM


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [Neutron] VXLAN configuration - host machine unable to ping other machines in the LAN

Hello All,

I am trying to set up a single-node openstack vxlan configuration.
This is an excerpt from the localrc file:

Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan

after running stack.sh, I made changes to the ml2_conf.ini file so that it
looked like below for the sections shown:

[ml2]
tenant_network_types = vxlan
type_drivers = local,vxlan
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 32769:34000

[ml2_type_vxlan]
vni_ranges = 65537:69999

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

[ovs]
local_ip = 127.0.0.1
tunnel_type = vxlan
tunnel_bridge = br-tun
integration_bridge = br-int
tunnel_id_ranges = 65537:69999
tenant_network_type = vxlan
enable_tunneling = true

[agent]
tunnel_types = vxlan
root_helper = sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
l2_population = False

I am able to successfully launch instances, but I am not able to ping other
hosts in the LAN. (Can't ping from the instances or from the host machine.)

br-ex, br-int and br-tun are the bridges created.
eth0 and eth1 are the interfaces on the machine.

eth0 is attached to br-tun and is in promisc mode. br-tun has the host IP.

eth1 is attached to br-ex and is in promisc mode. br-ex is in the Openstack
Public address pool.

Could you please help as to where I might be going wrong?
I referred to the below link for the configuration:
http://www.opencloudblog.com/?p=300

Regards
Prabhu


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

local_ip should be an IP address of network interface used for VXLAN
network instead of 127.0.0.1.
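
That is, in the [ovs] section of ml2_conf.ini, something like the following
(a hedged sketch; the address below is a placeholder for the IP of the host's
VXLAN-facing interface):

    [ovs]
    # Use the IP bound to the interface that carries VXLAN traffic,
    # not the loopback address:
    local_ip = 192.168.1.10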

On Mon, Sep 1, 2014 at 8:02 PM, prabhuling kalyani prabhu.nk@gmail.com wrote:
Hello All,

I am trying to setup a single node openstack vxlan configuration.
This is an excerpt from the localrc file:
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan

after running stack.sh, I made changes to the ml2_conf.ini file so that it
looked like below for the sections shown:

[ml2]
tenant_network_types = vxlan
type_drivers = local,vxlan
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 32769:34000

[ml2_type_vxlan]
vni_ranges = 65537:69999

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

[ovs]
local_ip = 127.0.0.1
tunnel_type = vxlan
tunnel_bridge = br-tun
integration_bridge = br-int
tunnel_id_ranges = 65537:69999
tenant_network_type = vxlan
enable_tunneling = true

[agent]
tunnel_types = vxlan
root_helper = sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
l2_population = False

I am able to successfully launch instances, but I am not able to ping other
hosts in the LAN. (Can't ping from the instances or from the host machine.)

br-ex, br-int and br-tun are the bridges created.
eth0 and eth1 are the interfaces on the machine.

eth0 is attached to br-tun and is in promisc mode. br-tun has the host IP.

eth1 is attached to br-ex and is in promisc mode. br-ex is in the Openstack
Public address pool.

Could you please help as to where I might be going wrong?
I referred to the below link for the configuration:
http://www.opencloudblog.com/?p=300

Regards
Prabhu


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Some documentation on the Docker plugin for Heat

Hello all,

I thought some folks might find this of interest. I've recently spent
some time looking at the Docker plugin for Heat, and wrote an article
about using it:

http://blog.oddbit.com/2014/08/30/docker-plugin-for-openstack-he/

And produced some annotated documentation for the plugin itself:

http://blog.oddbit.com/2014/08/30/docker-contain-doc/

Cheers,

--
Lars Kellogg-Stedman lars@redhat.com | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack | http://blog.oddbit.com/


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [OpenStack] [nova] [horizon] blueprint "serial-ports" - Exploitation from horizon?

The blueprint "serial-ports" [1] provides an alternative to VNC and
SPICE. This is exposed via API to the user. Are there any plans to
exploit this (pretty cool) feature with horizon?

[1] https://blueprints.launchpad.net/nova/+spec/serial-ports

Regards,
Markus


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] How to reach your instances without public ip, without floating ip

I've written and here contribute for your hacking pleasure a couple of
things I've found useful.

The first is an ability to ssh to an arbitrary instance inside an OpenStack
cloud, without having any public IP. https://github.com/donbowman/ssh-jump

The second is an ability to vpn to an arbitrary instance inside an
OpenStack cloud, also without public IP.
https://github.com/donbowman/sstp-proxy

These work properly with namespaces and with multiple compute/network
nodes. For the 'ssh jump', I created a 'jump' user (which doesn't allow
interactive login) on the l3 router node. This allows users to simply 'ssh
me@myhost+cloud', and the +cloud does all the magic.

For the 'sstp proxy', I parse the SSTP url to extract the tenant/user/host,
and then proxy an SSTP session in to that host. I used softether on the
host.

I find the former (ssh) very useful for e.g. scp, port-forward, generally
accessing my instances. The 2nd is nice because it allows an external host
to become 'inside' your Heat stack.

Enjoy and fork @ will.

--don


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi!

Why not disable NAT at the L3 router, and then just create a tenant
subnet with public IPv4 addresses?

Or, just use IPv6... =P

On 1 September 2014 14:41, Don Waterloo don.waterloo@gmail.com wrote:

I've written and here contribute for your hacking pleasure a couple of
things I've found useful.

The first is an ability to ssh to an arbitrary instance inside an
OpenStack cloud, without having any public IP.
https://github.com/donbowman/ssh-jump

The second is an ability to vpn to an arbitrary instance inside
an OpenStack cloud, also without public IP.
https://github.com/donbowman/sstp-proxy

These work properly with namespaces and with multiple compute/network
nodes. For the 'ssh jump', I created a 'jump' user (which doesn't allow
interactive login) on the l3 router node. This allows users to simply 'ssh
me@myhost+cloud', and the +cloud does all the magic.

For the 'sstp proxy', I parse the SSTP url to extract the
tenant/user/host, and then proxy an SSTP session in to that host. I used
softether on the host.

I find the former (ssh) very useful for e.g. scp, port-forward, generally
accessing my instances. The 2nd is nice because it allows an external host
to become 'inside' your Heat stack.

Enjoy and fork @ will.

--don


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Windows 2008 R2 instances now blue screen of death when running on different host hardware

Greetings all,

We have recently begun running on a somewhat different class of hardware,
and very stable win 2k8 r2 instances now occasionally blue screen.

The operating system, kvm, libvirt, etc.. are the same (or similar -
there might be slightly newer versions of some of them) so I'm assuming
it's something to do with the fact that the images were created on the
older hardware.

Does anyone have any suggestions as to where I should look?

Thanks
JR


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Is your compute node IceHouse + Ubuntu 14.04 + Linux <= 3.16.0-32-generic?


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [Heat] VolumeAttachement Groups

Hello,

I would like to know if it's possible to use a Resource Group in Heat
templates to create cinder volumes and then another Resource Group to
attach these volumes to a VM.
So for a VM with 10 volumes we'll need 3 resources: 1 for the VM, another
(ResourceGroup) for the creation of the 10 volumes, and a third
(ResourceGroup) for the attachment of the volumes.
I'm using Heat 0.2.8.

Best Regards,


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On Tue, Sep 02, 2014 at 04:19:49PM +0200, Abbass MAROUNI wrote:
Hello,

I would like to know if it's possible to use a Resource Group in Heat
templates to create cinder volumes and then another Resource Group to attach
these volumes to a VM.
So for a VM with 10 volumes we'll need 3 resources: 1 for the VM, another
(ResourceGroup) for the creation of the 10 volumes, and a third (ResourceGroup)
for the attachment of the volumes.

Yes, this is possible, but not quite via the method you describe.

You need to create a number of nested stacks via one ResourceGroup, where
the nested stack contains the volume and volume attachment resources (e.g.
the things you want to group), then you create the server and pass the ID
in to all the nested stacks via a property specified in the resource group
definition.

I've just posted an example which shows how this works:

https://review.openstack.org/119015

Steve
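
As a rough sketch of that shape (hedged; not Steve's exact example — the
nested template file name, parameter name, and count below are invented for
illustration):

    resources:
      my_server:
        type: OS::Nova::Server
        properties:
          image: { get_param: image }
          flavor: { get_param: flavor }

      volume_group:
        type: OS::Heat::ResourceGroup
        properties:
          count: 10
          resource_def:
            # volume_with_attachment.yaml is a nested stack template
            # containing an OS::Cinder::Volume plus an
            # OS::Cinder::VolumeAttachment wired together
            type: volume_with_attachment.yaml
            properties:
              instance_uuid: { get_resource: my_server }

where volume_with_attachment.yaml declares an instance_uuid parameter and
feeds it into its OS::Cinder::VolumeAttachment resource.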

[Openstack] [trove] trove + qpid

Hi there,

I've been following the CentOS guide for Icehouse [1] and I got
everything working correctly but trove.

It looks like trove and qpid might not work well together as per [2].

Has anyone managed to get Trove to work along with qpid? If yes, would
you mind sharing your /etc/trove/*.conf files with me?

Many thanks!!

David

[1]
http://docs.openstack.org/icehouse/install-guide/install/yum/content/ch_trove.html
[2] https://bugs.launchpad.net/trove/+bug/1214119


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On 9/2/2014 10:32 AM, David Pintor wrote:
Hi there,

I've been following the CentOS guide for Icehouse [1] and I got
everything working correctly but trove.

It looks like trove and qpid might not work well together as per [2].

Has anyone managed to get Trove to work along with qpid? If yes, would
you mind sharing your /etc/trove/*.conf files with me?

Many thanks!!

David

[1]
http://docs.openstack.org/icehouse/install-guide/install/yum/content/ch_trove.html

[2] https://bugs.launchpad.net/trove/+bug/1214119


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

One thing I ran into was that trove isn't using oslo.messaging like the other
projects, so when I was configuring the trove config files I set
rpc_backend=qpid, which blew up. Until trove is using oslo.messaging you
have to specify the full module path to the impl_qpid rpc module in trove.
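
Concretely, that means something like this in the trove config files (a
hedged sketch based on the Icehouse module layout; verify the path against
your installed tree):

    [DEFAULT]
    # Full module path instead of the short "qpid" name:
    rpc_backend = trove.openstack.common.rpc.impl_qpid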

--

Thanks,

Matt Riedemann

[Openstack] Getting started with Dev

Hi Everyone,

I came to know about OpenStack a year back, and I have contributed to a
couple of open source projects over the past few years. Now, having learned
about openstack a little more, I would like to contribute to OpenStack and
gain more knowledge in the process.

I started with the getting started tasks and there is one task which says
"create an openstack profile". This asks for an affiliation which I don't
have. So does that mean, I am not eligible for contributions?

Thanks,
Sharan Kumar M


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

In order to be able to submit contributions to Openstack you need to create
an account at https://launchpad.net/openstack/+login ; with this you will be
able to report bugs, assign yourself to blueprints, send patches to gerrit, etc.

On 2 September 2014 14:13, Sharan Kumar M sharan.monikantan@gmail.com
wrote:

Hi Everyone,

I came to know about OpenStack a year back, and I have contributed to a
couple of open source projects over the past few years. Now, having learned
about openstack a little more, I would like to contribute to OpenStack and
gain more knowledge in the process.

I started with the getting started tasks and there is one task which says
"create an openstack profile". This asks for an affiliation which I don't
have. So does that mean, I am not eligible for contributions?

Thanks,
Sharan Kumar M


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Heat template works in command line but fails at dashboard

Hi All,

I can launch a template from CLI by running 'heat stack-create -f
mytemplate.yaml', but I cannot launch it from the dashboard because it gets
the wrong security group.

I have a security group parameter defined like this:

my_security_group:
  type: comma_delimited_list
  label: 'Node ddddd Security Group(s)'
  default:
  - default

Then I used get_param to get this parameter:

security_groups:
  get_param: my_security_group

When launched from the dashboard, the value of security_groups becomes
'[u'default']', which is supposed to be 'default'.
Has anyone ever had an issue like this?

Thanks

Tao


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

ZHOU TAO A tao.a.zhou@alcatel-lucent.com wrote on 09/02/2014 10:23:34
PM:

I can launch a template from CLI by running 'heat stack-create -f
mytemplate.yaml', but I cannot launch it from the dashboard because it gets
the wrong security group.

I have a security group parameter defined like this:

my_security_group:
  type: comma_delimited_list
  label: 'Node ddddd Security Group(s)'
  default:
  - default

Then I used get_param to get this parameter:

security_groups:
  get_param: my_security_group

When launched from the dashboard, the value of security_groups becomes
'[u'default']', which is supposed to be 'default'.

No, I have not had a problem like this myself.

It looks to me like the dashboard is doing it right. The template snippet
you exhibit sets the default to be a list of one string (review YAML
syntax). Without seeing more details, I cannot say why a correct value
for the my_security_group parameter would lead to a failure. You may have
multiple problems mixed together here.

Regards,
Mike
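
To make the YAML point concrete (a hedged sketch; the parameter names are
invented):

    parameters:
      sg_as_list:
        type: comma_delimited_list
        # A YAML block sequence: this default is the list ['default']
        default:
        - default
      sg_as_string:
        type: comma_delimited_list
        # A plain scalar: this default is the string 'default'
        default: default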

[Openstack] Open vSwitch agent - Down - Error on logs, it doesn't come up anymore...

Guys,

One of my Compute Nodes has its Open vSwitch agent in a Down state; I'm
seeing the following errors after trying to restart
neutron-plugin-openvswitch-agent:


==> /var/log/neutron/openvswitch-agent.log <==
2014-09-02 20:18:13.665 5503 ERROR neutron.agent.linux.ovsdb_monitor
[req-91a64dfc-0902-49e5-8d55-e9bcd31d69b3 None] Error received from ovsdb
monitor: 2014-09-02T23:18:13Z|00001|fatal_signal|WARN|terminating with
signal 15 (Terminated)
2014-09-02 20:18:15.492 5503 CRITICAL neutron
[req-91a64dfc-0902-49e5-8d55-e9bcd31d69b3 None] Trying to re-send() an
already-triggered event.

I have about 15 Compute Nodes, only 1 with this error...

What can I do?!

Thanks!
Thiago


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

BTW, I'm running IceHouse with Ubuntu 14.04.1, VLAN Provider Networks (no
GRE, no VXLAN)...

On 3 September 2014 01:27, Martinx - ジェームズ thiagocmartinsc@gmail.com
wrote:

Guys,

One of my Compute Nodes has its Open vSwitch agent in a Down state; I'm
seeing the following errors after trying to restart
neutron-plugin-openvswitch-agent:


==> /var/log/neutron/openvswitch-agent.log <==
2014-09-02 20:18:13.665 5503 ERROR neutron.agent.linux.ovsdb_monitor
[req-91a64dfc-0902-49e5-8d55-e9bcd31d69b3 None] Error received from ovsdb
monitor: 2014-09-02T23:18:13Z|00001|fatal_signal|WARN|terminating with
signal 15 (Terminated)
2014-09-02 20:18:15.492 5503 CRITICAL neutron
[req-91a64dfc-0902-49e5-8d55-e9bcd31d69b3 None] Trying to re-send() an
already-triggered event.

I have about 15 Compute Nodes, only 1 with this error...

What can I do?!

Thanks!
Thiago


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Re: [Openstack] Welcome to the "Openstack" mailing list (Digest mode)

刘刚

From: openstack-request
Date: 2014-09-03 11:19
To: liugang12
Subject: Welcome to the "Openstack" mailing list (Digest mode)
Welcome to the Openstack@lists.openstack.org mailing list!

To post to this list, send your email to:

openstack@lists.openstack.org

General information about the mailing list is at:

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

If you ever want to unsubscribe or change your options (eg, switch to
or from digest mode, change your password, etc.), visit your
subscription page at:

http://lists.openstack.org/cgi-bin/mailman/options/openstack/liugang12%40sgepri.sgcc.com.cn

You can also make such adjustments via email by sending a message to:

Openstack-request@lists.openstack.org

with the word `help' in the subject or body (don't include the
quotes), and you will get back a message with instructions.

You must know your password to change your options (including changing
the password, itself) or to unsubscribe. It is:

LENOVO1983

Normally, Mailman will remind you of your lists.openstack.org mailing
list passwords once every month, although you can disable this if you
prefer. This reminder will also include instructions on how to
unsubscribe or change your account options. There is also a button on
your options page that will email your current password to you.
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Re: [Openstack] [openstack-dev] [H][Neutron][IPSecVPN]Cannot tunnel two namespace Routers

Hi,

I did the same in the past for a demo, and it worked well.
Does the secgroup of VM2 allow connections from VM1?
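
If the security group is the culprit, a rule along these lines would admit
the traffic (a hedged sketch; the group name and CIDR are placeholders taken
from the addresses in the capture quoted below):

    # Allow ICMP into VM2's security group from VM1's subnet:
    neutron security-group-rule-create --protocol icmp \
        --direction ingress --remote-ip-prefix 128.6.25.0/24 default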

On Wednesday, September 3, 2014, Germy Lure germy.lure@gmail.com wrote:

Hi Stackers,

Network TOPO like this: VM1(net1)--Router1-------IPSec VPN
tunnel-------Router2--VM2(net2)
If the left and right sides are deployed on different OpenStack environments,
it works well. But in the same environment, Router1 and Router2 are namespaces
implemented on the same network node, and I cannot ping from VM1 to VM2.

In R2 (Router2), the tcpdump tool tells us that R2 receives ICMP echo request
packets but doesn't send them out.

7837C113-D21D-B211-9630-000000821800:~ # ip netns exec
qrouter-4fd2e76e-37d0-4d05-b5a1-dd987c0231ef tcpdump -i any
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes

11:50:14.853470 IP 10.10.5.2 > 10.10.5.3: ESP(spi=0xc6d65c02,seq=0x1e6), length 132
11:50:14.853470 IP 128.6.25.2 > 128.6.26.2: ICMP echo request, id 44567, seq 486, length 64
11:50:15.853475 IP 10.10.5.2 > 10.10.5.3: ESP(spi=0xc6d65c02,seq=0x1e7), length 132
11:50:15.853475 IP 128.6.25.2 > 128.6.26.2: ICMP echo request, id 44567, seq 487, length 64
11:50:16.853461 IP 10.10.5.2 > 10.10.5.3: ESP(spi=0xc6d65c02,seq=0x1e8), length 132
11:50:16.853461 IP 128.6.25.2 > 128.6.26.2: ICMP echo request, id 44567, seq 488, length 64
11:50:17.853447 IP 10.10.5.2 > 10.10.5.3: ESP(spi=0xc6d65c02,seq=0x1e9), length 132
11:50:17.853447 IP 128.6.25.2 > 128.6.26.2: ICMP echo request, id 44567, seq 489, length 64
^C
8 packets captured
8 packets received by filter
0 packets dropped by kernel

ip addr in R2:

7837C113-D21D-B211-9630-000000821800:~ # ip netns exec
qrouter-4fd2e76e-37d0-4d05-b5a1-dd987c0231ef ip addr
187: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
206: qr-4bacb61c-72: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether fa:16:3e:23:10:97 brd ff:ff:ff:ff:ff:ff
    inet 128.6.26.1/24 brd 128.6.26.255 scope global qr-4bacb61c-72
    inet6 fe80::f816:3eff:fe23:1097/64 scope link
       valid_lft forever preferred_lft forever
208: qg-4abd4bb0-21: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether fa:16:3e:e6:cd:1a brd ff:ff:ff:ff:ff:ff
    inet 10.10.5.3/24 brd 10.10.5.255 scope global qg-4abd4bb0-21
    inet6 fe80::f816:3eff:fee6:cd1a/64 scope link
       valid_lft forever preferred_lft forever

In addition, the kernel counters in "/proc/net/snmp" in the namespace are
unchanged. Do these counters not work well with namespaces?

BR,
Germy


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi,

Thanks for your response.
I just disabled the SEG function (deployed on the compute nodes).
The ICMP packets hadn't even left the network node; I cannot tcpdump packets
on the qr-xx interface.

Can you describe your demo? Havana or Icehouse? Network node kernel
version? etc.

2014-09-03 12:38 GMT+08:00 Akihiro Motoki amotoki@gmail.com:

Hi,

I did the same in the past for demo, and it worked well.
Does secgroup of VM2 allow connections from VM1?

On Wednesday, September 3, 2014, Germy Lure germy.lure@gmail.com wrote:

Hi Stackers,

Network TOPO like this: VM1(net1)--Router1-------IPSec VPN
tunnel-------Router2--VM2(net2)
If the left and right sides are deployed on different OpenStack environments,
it works well. But in the same environment, Router1 and Router2 are namespaces
implemented on the same network node, and I cannot ping from VM1 to VM2.

In R2 (Router2), the tcpdump tool tells us that R2 receives ICMP echo request
packets but doesn't send them out.

7837C113-D21D-B211-9630-000000821800:~ # ip netns exec
qrouter-4fd2e76e-37d0-4d05-b5a1-dd987c0231ef tcpdump -i any
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes

11:50:14.853470 IP 10.10.5.2 > 10.10.5.3: ESP(spi=0xc6d65c02,seq=0x1e6), length 132
11:50:14.853470 IP 128.6.25.2 > 128.6.26.2: ICMP echo request, id 44567, seq 486, length 64
11:50:15.853475 IP 10.10.5.2 > 10.10.5.3: ESP(spi=0xc6d65c02,seq=0x1e7), length 132
11:50:15.853475 IP 128.6.25.2 > 128.6.26.2: ICMP echo request, id 44567, seq 487, length 64
11:50:16.853461 IP 10.10.5.2 > 10.10.5.3: ESP(spi=0xc6d65c02,seq=0x1e8), length 132
11:50:16.853461 IP 128.6.25.2 > 128.6.26.2: ICMP echo request, id 44567, seq 488, length 64
11:50:17.853447 IP 10.10.5.2 > 10.10.5.3: ESP(spi=0xc6d65c02,seq=0x1e9), length 132
11:50:17.853447 IP 128.6.25.2 > 128.6.26.2: ICMP echo request, id 44567, seq 489, length 64
^C
8 packets captured
8 packets received by filter
0 packets dropped by kernel

ip addr in R2:

7837C113-D21D-B211-9630-000000821800:~ # ip netns exec
qrouter-4fd2e76e-37d0-4d05-b5a1-dd987c0231ef ip addr
187: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
206: qr-4bacb61c-72: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether fa:16:3e:23:10:97 brd ff:ff:ff:ff:ff:ff
    inet 128.6.26.1/24 brd 128.6.26.255 scope global qr-4bacb61c-72
    inet6 fe80::f816:3eff:fe23:1097/64 scope link
       valid_lft forever preferred_lft forever
208: qg-4abd4bb0-21: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether fa:16:3e:e6:cd:1a brd ff:ff:ff:ff:ff:ff
    inet 10.10.5.3/24 brd 10.10.5.255 scope global qg-4abd4bb0-21
    inet6 fe80::f816:3eff:fee6:cd1a/64 scope link
       valid_lft forever preferred_lft forever

In addition, the kernel counters in "/proc/net/snmp" in the namespace are
unchanged. Do these counters not work well with namespaces?

BR,
Germy


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Query regarding using Openstack

We are undergraduate students, and for our undergraduate research work we are planning to build an OpenStack cloud on our Linux machine. For better security of our cloud we are planning to develop an intrusion detection technique using a Hidden Markov Model. To develop the HMM we planned to use the Hidden Markov Model Toolkit (HTK). Will this be a good decision? Can you suggest any related work done in this area which might help us accomplish our research?

Thank You

Zunayeed, Ahsan
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi:

As a first approach, you could look at the Security Guide (
http://docs.openstack.org/security-guide/content/) or consider using the
Neutron plugin framework. A good place to get more help in
this area could be the openstack-dev list (
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev) and the
Neutron IRC channel (https://wiki.openstack.org/wiki/IRC).

"OpenStack Networking has an extension framework allowing additional
network services, such as intrusion detection systems (IDS), load
balancing, firewalls and virtual private networks (VPN) to be deployed and
managed."

Regards,


JuanFra Rodriguez Cardoso

2014-09-03 8:31 GMT+02:00 Zunayeed Zahir zaryan2905@live.com:

We are undergraduate students, and for our undergraduate research work we
are planning to build an OpenStack cloud on our Linux machine. For better
security of our cloud we are planning to develop an intrusion
detection technique using a Hidden Markov Model. To develop the HMM
we planned to use the Hidden Markov Model Toolkit (HTK). Will this be a good
decision? Can you suggest any related work done in this area
which might help us accomplish our research?

Thank You
Zunayeed, Ahsan


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [Openstack Heat] create failed, Bad network format: missing 'uuid' (HTTP 400)

I am using following template to create a Heat Stack.

heat_template_version: 2013-05-23

description: Hot Template to deploy a single server

parameters:
  ImageID:
    type: string
    description: Image ID
  NetID:
    type: string
    description: External Network ID

resources:
  server_0:
    type: OS::Nova::Server
    properties:
      name: "server0"
      image: { get_param: ImageID }
      flavor: "m1.small"
      networks:
      - network: { get_param: NetID }

outputs:
  server0_ip:
    description: IP of the server
    value: { get_attr: [ server_0, first_address ] }

When I create the stack, it shows CREATE_FAILED status: BadRequest: Bad network format: missing 'uuid' (HTTP 400) (Request-ID: req-c8360423-e597-495e-9b36-0158177ccd1a). I also attached a snapshot of the error.

P.S.: I checked and double-checked the Network ID and Image ID but the error remains. The heat logs show the same error: heat.engine.resource BadRequest: Bad network format: missing 'uuid' (HTTP 400).

I had also asked at ask.openstack.org but still have no answers.
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On Wed, Sep 03, 2014 at 06:54:14AM +0000, khayam.gondal@gmail.com wrote:
I am using following template to create a Heat Stack.

heat_template_version: 2013-05-23

 description: Hot Template to deploy a single server

 parameters:
   ImageID:
     type: string
     description: Image ID
   NetID:
     type: string
     description: External Network ID

 resources:
   server_0:
     type: OS::Nova::Server
     properties:
       name: "server0"
       image: { get_param: ImageID }
       flavor: "m1.small"
       networks:
       - network: { get_param: NetID }

 outputs:
   server0_ip:
     description: IP of the server
     value: { get_attr: [ server_0, first_address ] }

When I create the stack, it shows CREATE_FAILED status: BadRequest: Bad network format: missing 'uuid' (HTTP 400) (Request-ID: req-c8360423-e597-495e-9b36-0158177ccd1a). I also attached a snapshot of the error.

I believe you either need to use NetID to create a port, then pass that to
the OS::Nova::Server resource, or pass a fully qualified network definition
in via networks (e.g. a map containing the port):

http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Server

https://github.com/openstack/heat-templates/blob/master/hot/servers_in_existing_neutron_net.yaml#L38

Steve
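
A minimal sketch of the port-based variant (hedged; adapted from the pattern
in the links above, with an invented resource name):

    resources:
      server_port:
        type: OS::Neutron::Port
        properties:
          network_id: { get_param: NetID }

      server_0:
        type: OS::Nova::Server
        properties:
          name: "server0"
          image: { get_param: ImageID }
          flavor: "m1.small"
          networks:
          - port: { get_resource: server_port }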

[Openstack] Help regarding openstack dashboard

Hello everyone,

I am new here; it is my first email to the list. I successfully completed
installation of openstack on Ubuntu 14.04 LTS and it gave me URLs for
accessing dashboard, admin user password and keystone services as well. But
I am facing following issues:

  1. After I log in to the openstack dashboard, I get the following error:
     Error: Unable to retrieve usage information
     I am not able to create any instances, volumes or anything there.
  2. I tried following this link
     https://answers.launchpad.net/horizon/+question/181166 to resolve the
     error, but I found no nova.conf or keystone.conf on my system.

Please tell me how to resolve these errors. Any useful link, suggestion or
help will be appreciated. Thank you in advance!

--
Regards,
Sadia


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

By the way, I followed the instructions given at www.devstack.org and the
installation went smoothly.

On Wed, Sep 3, 2014 at 1:51 PM, Sadia Bashir 11msccssbashir@seecs.edu.pk
wrote:

Hello everyone,

I am new here; it is my first email to the list. I successfully completed
installation of openstack on Ubuntu 14.04 LTS and it gave me URLs for
accessing dashboard, admin user password and keystone services as well. But
I am facing following issues:

  1. After I log in to the openstack dashboard, I get the following error:
     Error: Unable to retrieve usage information
     I am not able to create any instances, volumes or anything there.
  2. I tried following this link
     https://answers.launchpad.net/horizon/+question/181166 to resolve the
     error, but I found no nova.conf or keystone.conf on my system.

Please tell me how to resolve these errors. Any useful link, suggestion or
help will be appreciated. Thank you in advance!

--
Regards,
Sadia

--


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Ceilometer expirer doesn't work?!

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi guys,

I noticed my MongoDB grows fast with the mass of samples and meters from
ceilometer. That's normal with that much tracking data, but I need to
drop samples after a while to keep my db clean, so I took a look at
ceilometer-expirer (no documentation present).

I've found the option

time_to_live =

ceilometer-expirer will check this option and drop all samples older than
time_to_live ... so the theory ...

My database is now 96GB and I tested the expirer with different
time_to_live values, but I can't see any drop in my database.
I've checked it with mongostat to see which operations are triggered,
but I see only inserts, updates and selects.

Is there any trick? I can't find any doc, troubleshooting guide or bug
report about this issue. I need to clean my db, otherwise I can't get
data because my search queries with the ceilometer client time out.
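
For reference, in Havana the option belongs in the [database] section of
ceilometer.conf (a hedged sketch; the value shown is just an example):

    [database]
    # Samples older than this many seconds become eligible for expiry;
    # -1 (the default) keeps them forever.
    time_to_live = 604800

The expirer then has to be run (e.g. from cron) as
ceilometer-expirer --config-file /etc/ceilometer/ceilometer.conf.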

System:
* Ubuntu 12.04
* Havana release
* MongoDB 2.6.4 (replicaSet)

Cheers and Thanks
Heiko


anynines.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJUBt2GAAoJELxFogM4ixOFc1oH/RhzhvIcLiO3fjNHhSqd/wa0
ECofYxNaLNgYi7ikHPtYLQ/QU2n9xM6fvY9gU48PJq/ykM713CigmgiRQ5iAlqaL
cpcEqSoT0MogG3F+gfnEkwh7LilxQ8clxz9oyNj6zdoGG1Cjr+9c4t+0NZHgAOHp
V3XQMrmTvPdTSOO+oMeUFkvvNSznEOzxiDkOTXuoJY4Z4HIPOsjhQguf25TRxti8
XYZi2QkgtbxBwbV+KvORGCeIFE7gb9Txm3JmlUYd6xA9nPgntRjk3/KKoAT3E5gU
70ZRlJzH9O6XP+jIjHAcDvLTzRHOO0pArrEPZS4HrPNGM+Nhz3BCp8oQURgVNeA=
=WrcO
-----END PGP SIGNATURE-----


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] External accessing issue

Hi all,

I'm working on the external accessing configuration inside Virtualbox with a
CentOS 6.5 x86_64.

Currently, I find that my Openstack instance can ping the Centos, but cannot
ping outside.

Likewise, my Centos can ping the Openstack instance, but outside hosts
cannot.

Has anyone encountered the same issue?

BTW, my virtualbox network was using Bridge-Adapter.

Thanks for any help.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi Xianyi

For Openstack VMs to be accessed from the external network, your external
interface should reside in br-ex. Can you provide the output of
ovs-vsctl show, ifconfig, and route -n on your CentOS 6.5 x86_64?

On Wed, Sep 3, 2014 at 6:38 PM, Xianyi Ye yexianyi@sina.com wrote:

Hi all,

I’m working on the external accessing configuration inside Virtualbox with
a CentOS 6.5 x86_64.

Currently, I find that my Openstack instance can ping the Centos, but
cannot ping outside.

Likewise, my Centos can ping the Openstack instance, but outside hosts
cannot.

Has anyone encountered the same issue?

BTW, my virtualbox network was using Bridge-Adapter.

Thanks for any help.


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Unexpected error in OpenStack Nova

Hi,
After successful installation of both keystone and nova, I tried to execute
the 'nova list' command with the following env variables (my deployment model
is single machine deployment):

export OS_USERNAME=admin
export OS_PASSWORD=...
export OS_TENANT_NAME=service
export OS_AUTH_URL=http://10.0.0.1:5000

But the following unknown error occurred:

ERROR: (HTTP 300)

My nova.conf has the following configuration to connect to keystone:

[keystone_authtoken]
auth_uri = localhost:5000
auth_host = 10.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = novapass

How can I solve the problem?
Thanks in advance.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Have you checked the nova-api log?

-Chris

On Wed, Sep 3, 2014 at 10:11 AM, Hossein Zabolzadeh zabolzadeh@gmail.com
wrote:

Hi,
After successful installation of both keystone and nova, I tried to
execute the 'nova list' command with the following env variables (my
deployment model is single machine deployment):

export OS_USERNAME=admin
export OS_PASSWORD=...
export OS_TENANT_NAME=service
export OS_AUTH_URL=http://10.0.0.1:5000

But the following unknown error occurred:

ERROR: (HTTP 300)

My nova.conf has the following configuration to connect to keystone:

[keystone_authtoken]
auth_uri = localhost:5000
auth_host = 10.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = novapass

How can I solve the problem?
Thanks in advance.


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Help regarding attaching a cirrOS image for glance with the openstack dashboard

Hello everyone,

I want to know how to attach a cirrOS image for glance with the openstack
dashboard. I tried the following but it didn't work for me:

Downloaded CirrOS test image from the link:
https://launchpad.net/cirros/+download and then followed commands from its
ReadMe file:

  • get some build dependencies:
    $ sudo apt-get -y install bison flex texinfo build-essential gettext
    ncurses-dev unzip bzr qemu-kvm cvs quilt

  • bzr branch lp:cirros

  • cd cirros

  • download buildroot and setup environment
    $ br_ver="2012.05"
    $ mkdir -p ../download
    $ ln -snf ../download download
    $ ( cd download && wget
      http://buildroot.uclibc.org/downloads/buildroot-${br_ver}.tar.gz )
    $ tar -xvf download/buildroot-${br_ver}.tar.gz
    $ ln -snf buildroot-${br_ver} buildroot

  • update the CA certificate bundle
    $ ./bin/mkcabundle > src/etc/ssl/certs/ca-certificates.crt

  • apply any local cirros patches to buildroot
    ( cd buildroot && QUILT_PATCHES=$PWD/../patches-buildroot quilt push -a )

  • download the buildroot sources
    $ make ARCH=i386 br-source

  • Build buildroot for a given arch

    ARCH should be set to 'i386', 'x86_64' or 'arm'

    $ make ARCH=i386 OUT_D=$PWD/output/i386

    This will do a full buildroot build, which will take a while. The output
    that CirrOS is interested in is output/i386/rootfs.tar.
    That file is the full buildroot filesystem, and is used as input for
    subsequent steps here.

  • Download a kernel to use.
    The kernel input to bundle must be in deb format. The ubuntu '-virtual'
    kernel is used as a starting point.

    $ kver="3.2.0-60.91";
    $ burl="https://launchpad.net/ubuntu/+archive/primary/+files/linux-image"
    $ for arch in i386 x86_64 arm; do
        xarch=$arch; flav="virtual"
        [ "$arch" = "x86_64" ] && xarch="amd64";
        [ "$arch" = "arm" ] && xarch="armel" && flav="omap"
        url="$burl-${kver%.*}-${flav}_${kver}_${xarch}.deb"
        wget "$url" -O download/${url##*/}
        ln -sf ${url##*/} download/kernel-${arch}.deb
      done

  • build disk images using bin/bundle
    $ sudo ./bin/bundle -v output/$ARCH/rootfs.tar download/kernel-$ARCH.deb
    output/$ARCH/images

All commands worked fine until the last one, for building disk images
using bin/bundle, which is giving me the following error:

preparing kernel overlay
dpkg-deb: error: failed to read archive `download/kernel-.deb': No such file or directory
failed to extract kernel to /tmp/.bundle.gVsIsm/kernel

Please tell me what steps I should follow to attach a cirrOS image with the
openstack dashboard. Thank you!
--
Kind Regards,
Sadia


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] diskconfig issue in icehouse nova

Hi,

I am trying the icehouse nova and horizon. When I try booting an instance in Horizon, it fails with KeyError: 'disk_config'. After checking, I found that diskConfig was removed in nova (https://review.openstack.org/#/c/62227/) but added in Horizon (https://review.openstack.org/#/c/74911/).
Has anyone seen that? Is there a bug filed for this?

Thanks,
Qiang
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] glance cirros image connection error

Hi everyone,

I have installed devstack on an ubuntu 14.04 virtual machine. In order to
attach the glance cirros image I ran the following command:

$ glance image-show myCirrosImage

and it gave me following error:

Error finding address for
http://10.0.2.15:9292/v1/images/detail?limit=20&name=myCirrosImage:
HTTPConnectionPool(host='10.0.2.15', port=9292): Max retries exceeded with
url: /v1/images/detail?limit=20&name=myCirrosImage (Caused by : [Errno 111] Connection refused)

Please tell me what steps should I take to resolve this error. Thank you!
--
Kind Regards,
Sadia


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi Sadia,

check if the glance api service and registry service are running:
the registry server's default port is 9191 and the api server's is 9292.

Take a look into the glance log files for more information.
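
For example (a hedged sketch; on devstack the services normally run inside
the stack screen session):

    # Are the glance processes up?
    ps aux | grep -E 'glance-(api|registry)'
    # Are the default ports listening?
    netstat -tlnp | grep -E ':(9191|9292)'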

Cheers
Heiko

On 04.09.2014 06:51, Sadia Bashir wrote:
Hi everyone,

I have installed devstack on ubuntu14.04 virtual machine. In order to
attach glance cirros image I run following command:

$ glance image-show myCirrosImage

and it gave me following error:

Error finding address for
http://10.0.2.15:9292/v1/images/detail?limit=20&name=myCirrosImage:
HTTPConnectionPool(host='10.0.2.15', port=9292): Max retries exceeded with
url: /v1/images/detail?limit=20&name=myCirrosImage (Caused by : [Errno 111] Connection refused)

Please tell me what steps should I take to resolve this error. Thank you!


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


anynines.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJUCEBOAAoJELxFogM4ixOFWOQH/R94pW1b4vAEC2zK4YOXif2o
sSZWWiLN7YOOKQp9kama8yx8q43BjTcDkTdqK+HUY7Kv6F3hmI7D9ecu93r2ONh9
GygCB4DeJCJ+jfSDbZWnq4pFm7c53WelVQeIZyX17rfJ15Pt+KIpuBALHn77/1dA
yOb0UK+LgLcAkw7VjjzweDpQAAROwFlySBSIXemmr0hwpRau7uc+d5rsiuW8n1Zu
COd2S0Fw3tYLLKFE1PhQRmD8G/fpDUF6cMGpnwV2GrRoNH0YWhdkZJBHArDSHtUo
IF0+S+BUVpLwTzGbKezfdDBuI8/XZSWa4t6eZwC/Kupql4/9g7QYIn3lhf3mqvQ=
=ECF1
-----END PGP SIGNATURE-----

[Openstack] [LBaaS] Multiple ports per VIP

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi guys,

I am trying to find a solution for how I can add multiple ports to a single VIP.

Example:

VIP: 10.0.0.100
Pool members: 10.0.0.10,10.0.0.11
Protocol: HTTP and HTTPS (Port 80 and 443)

Is that possible? Otherwise we need an extra VIP for each port, and
that's not practical.

How do you solve this problem? That's a default use case so there must
be a solution.

System:
* Havana
* Ubuntu

Cheers and Thanks
Heiko


anynines.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJUCD8UAAoJELxFogM4ixOFiMAIAJBEmdZcqvwTPo+OpNAhxIjc
TKDa5PDR9/jDXG+h/ELDcfyOM8vJvonv+IOG+zytFEF625syA3JsFwFhZA0uFVvv
f2wt/cu06ET3vKsDiSTHvy/X5dj6lBN7QMVTTC/xwWpaxJTdIGLauB8MGMROmctI
soO3gxGKtGW9B7EbtQVRfi0L21vSG2nqQkUSsBlc3xQEP8aM8ua+bRpVKtylLMym
UIBB5VHc0vLgZWCnDL0x8IXF3hXGg4BBSeaSPZi81kfKGAHKTcj1D6c59O6vLkW8
I8WS6BIGOWq6K9tuSKCC2RS8X8qVWxGKQjebI6RJ9fZN5emJ8Jn5aQfnJOw/RYQ=
=pBNb
-----END PGP SIGNATURE-----


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi Eugene,

thanks for your fast response. Do you have a blueprint URL for me?
Then I can watch the blueprint to check when the feature is planned.

Thanks again
Heiko

On 04.09.2014 12:38, Eugene Nikanorov wrote:
Hi Heiko,

Unfortunately with the current state of code it's not possible.

Wait for the new lbaas version where this should be implemented with
'listener' concept.

Thanks,
Eugene.

On Thu, Sep 4, 2014 at 2:29 PM, Heiko Krämer hkraemer@anynines.com
wrote:

>
Hi guys,

I am trying to find a solution for how I can add multiple ports to a single VIP.

Example:

VIP: 10.0.0.100
Pool members: 10.0.0.10,10.0.0.11
Protocol: HTTP and HTTPS (Port 80 and 443)

Is that possible? Otherwise we need an extra VIP for each port, and
that's not practical.

How do you solve this problem? That's a default use case so there must
be a solution.

System:
* Havana
* Ubuntu

Cheers and Thanks
Heiko


anynines.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJUCEHZAAoJELxFogM4ixOFTREIALibi5gWze3wRXFKvqD5VaWp
BdZiVEOkVs1HkMo14DR2zFxR0RycmVItKMhyQHp88t7b5V/rwc615HKRkCewuk6A
YLHYCx/m5WP500QlDi/ShztA/M2XaweXzUAmLAH7V0R5jPLnVS4YZyQWkm0w6P+7
x1CPiw/IYCd0+7gv7yilq5y/RZjQoQZYJDz+Nm9X+TIOp/mJwq+WY5R8QmM6wOvX
qrY05y+Hhn/GkYQkpXBMBxkiS+ss7I2UUSqvCjQXNTHubzZJwrrXJTqKESc5TvhJ
RbpTZj8JnkSgxM1dXZaVUocJgUMy/17pd+peHCkY3X7EtuIK4sl9WZl1wTbqjRM=
=Ekja
-----END PGP SIGNATURE-----

[Openstack] Template for wordpress

Hi All,

I am trying to implement autoscaling using the attached template with multiple neutron networks.
While creating the server in the webservergroup, it fails with the error below:


webservergroup | c6189de2-d748-44f2-9b60-0d2e708e1d1f | Error: Resource CREATE failed: Error: Resource CREATE failed: BadRequest: Multiple possible networks found

I have attached the template to this mail. Please check it and let me know how to specify a network ID in the webservergroup.

Thanks
Kumar




Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi ,

You can set the default network inside the template itself, or pass it as a parameter while creating the stack.
Here is an example:

As parameter:

heat stack-create -f /root/rsoft-intall.yaml --parameters="server_name=r1m5;key_name=test;flavor=m1.small;image=aaf91618-ed6e-48e4-a6cc-e6f74dc6c82d;private_net_id=25993771-5cdf-4e66-84dd-a6c19d932e0d" wp-stack5

Note: you should give the network ID, not the name of the network.

Inside template:

private_net_id:
  type: string
  description: ID of private network into which servers get deployed

networks:
  - port: { get_resource: my_instance_port }

my_instance_port:
  type: OS::Neutron::Port
  properties:
    network_id: { get_param: private_net_id }

regards,
subbareddy
persistent systems ltd.

From: yalla.gnan.kumar@accenture.com [mailto:yalla.gnan.kumar@accenture.com]
Sent: Thursday, September 04, 2014 5:50 PM
To: openstack@lists.openstack.org
Subject: [Openstack] Template for wordpress

Hi All,

I am trying to implement autoscaling using the attached template with multiple neutron networks.
While creating the server in the webservergroup, it fails with the error below:


webservergroup | c6189de2-d748-44f2-9b60-0d2e708e1d1f | Error: Resource CREATE failed: Error: Resource CREATE failed: BadRequest: Multiple possible networks found

I have attached the template to this mail. Please check it and let me know how to specify a network ID in the webservergroup.

Thanks
Kumar



[Openstack] Zabbix

Guys,

I'm curious who is using Zabbix to monitor their openstack deployment. I know there is quite a bit of value in having the single pane of glass as well as the scripting and API capabilities. I have quite a bit of experience using zabbix to monitor cloud environments.

Thanks,

Mike


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

We (YY INC) plan to use zabbix for monitoring the openstack
cloud environment. Can you share more of your experience with us? Thanks.

I'm curious who is using Zabbix to monitor their openstack
deployment. I know there is quite a bit of value in having the single
pane of glass as well as the scripting and API capabilities. I have
quite a bit of experience using zabbix to monitor cloud environments.

Thanks,


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] error when creating first keystone user

Hi,

when I try to create admin user in keystone,

I got the following message :"The resource could not be found (HTTP 404)"

and the only message I got in keystone.log is the following :

"2014-09-04 15:41:56.432 2406 WARNING
keystone.openstack.common.versionutils [-] Deprecated:
keystone.middleware.core.XmlBodyMiddleware is deprecated as of Icehouse in
favor of support for "application/json" only and may be removed in K."

I'm trying to install openstack on CentOS 6.4 and got no error messages until
this one.

Thanks in advance

--
Stéphane EVEILLARD
Responsable Système et Développement IBS NETWORK
Coordinateur Version Française Openstack
Membre Association Openstack-fr et APRIL


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi Stephane,

Please check whether the keystone service is running:

command: service openstack-keystone status

Regards,
Sushma Korati
sushma_korati@persistent.co.in
Persistent Systems Ltd. | Partners in Innovation | www.persistentsys.com
P Please consider your environmental responsibility: Before printing this e-mail or any other document, ask yourself whether you need a hard copy.


From: Stephane EVEILLARD stephane.eveillard@gmail.com
Sent: Thursday, September 4, 2014 7:17 PM
To: openstack@lists.openstack.org
Subject: [Openstack] error whe creating fisrt keystone user

Hi,

when I try to create admin user in keystone,

I got the following message :"The resource could not be found (HTTP 404)"

and the only message I got in keystone.log is the following :

"2014-09-04 15:41:56.432 2406 WARNING keystone.openstack.common.versionutils [-] Deprecated: keystone.middleware.core.XmlBodyMiddleware is deprecated as of Icehouse in favor of support for "application/json" only and may be removed in K."

I'm trying to install openstack on CentOS 6.4 and got no error messages until this one.

Thanks in advance

--
Stéphane EVEILLARD
Responsable Système et Développement IBS NETWORK
Coordinateur Version Française Openstack
Membre Association Openstack-fr et APRIL


[Openstack] nova resize error

Hi All,

I am trying to resize an instance from the tiny flavor to small, using the
Cirros image, in Openstack Icehouse.

I can go ahead and resize, and it says 100% complete, but when I try the
resize-confirm command it gives the below error:

nova resize-confirm xxxxxxxxxxxx
ERROR: Cannot 'confirmResize' while instance is in vm_state active (HTTP 409)

Please let me know if we have an option to resize the flavor in Icehouse.

Regards,
Raghavendra Lad




Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] [OSSN 0023] Keystone logs auth tokens in URLs at the INFO log level

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Keystone logs auth tokens in URLs at the INFO log level


Summary

When a client accesses Keystone using the Identity API version 2, tokens
will be logged as part of some request URLs. Specifically, all requests to
the tokens resource are logged at the INFO level.

Affected Services / Software

Keystone, Grizzly, Havana, Icehouse, Juno

Discussion

Tokens are used to authorize users when making API requests against
OpenStack services. This allows previously authenticated users to make
API requests without providing their authentication information again,
such as their username and password. This makes them very sensitive
when stored and transmitted on the wire. Ideally tokens should never be
stored on the disk to avoid the possibility of an attacker obtaining
local storage access and reusing them in the system.

Keystone logs the request URLs at the INFO level to make system operations
and support easier. Unfortunately, when the tokens resource is accessed,
the URLs include the user's secret token, as in this case: (single line,
wrapped)

---- begin example keystone.log snippet ----
INFO eventlet.wsgi.server [-] 10.0.0.66 - - [22/Aug/2014 12:39:01]
"GET /v2.0/tokens/ HTTP/1.1" 403 325 0.006539
---- end example keystone.log snippet ----

Large systems often use remote logging mechanisms, which may use
unencrypted protocols such as syslog/udp. This could lead to
distributing the logfile entries containing tokens in plaintext over
untrusted networks. The target log collection systems may also use
different authorization rules than the local log files, which could
enable access to the tokens by support staff, or to third parties
storing the logs.

Additionally any load balancers and proxies processing the same request
may be logging the URL on their own. Their configuration and solution to
this problem is out of scope of this note and they should be checked
separately.

Version 3 of the Identity API does not pass the tokens in the URLs
anymore. This information is sent using the request headers or POST data
instead.

Recommended Actions

Where possible, users and services interacting with the Keystone service
should use the Identity API v3 endpoint. If that's not possible, restricting
Keystone's logging level to WARN will fix the immediate problem at the cost
of removing potentially useful log information. Due to the various ways
Keystone may be deployed and configured, the interaction of the 'debug',
'verbose', 'default_log_levels' and any WSGI server options should be
considered for this change. Keystone deployed via servers other than
eventlet will need its own solution.

If logging of all requests is required, this may be achieved by using a
third-party proxy, like Apache or Nginx with a configuration that does
not write the complete URL into the logs. For example Nginx can be
configured to switch to a customised log format using directive
'access_log' only for requests matching location '/v2.0/tokens/...'.
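
A minimal sketch of such an Nginx configuration (the format name, paths and
upstream address are illustrative; log_format belongs in the http block):

    log_format scrubbed '$remote_addr - $remote_user [$time_local] '
                        '"$request_method /v2.0/tokens/<redacted> '
                        '$server_protocol" $status $body_bytes_sent';
    location /v2.0/tokens {
        access_log /var/log/nginx/keystone_access.log scrubbed;
        proxy_pass http://127.0.0.1:35357;
    }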

Contacts / References

This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0023
Original LaunchPad Bug : https://bugs.launchpad.net/keystone/+bug/1348844
OpenStack Security ML : openstack-security@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJUCLsBAAoJEJa+6E7Ri+EVIEAIAK1AcGgdeoBhd5GVQjCmalot
jLvYDL9YcEB+mkXczps/TIL9QHVpfH6NTkaIAJzD56Sta6mrJJoSmp4oagResx9z
sUsi6rd1Mv6T/XVLH9/l/10qdl8tdfzbXZh9trzarR3YQywkRgrsjOCXmVP8U6S7
Ynd2qSkFPY2NLwSS9J72bAC4rMwidQfbZUOwzTGqUbtLCC8L8s0tud4snppCjjsq
/XmcY827m2R8HFiDnIXmNGNaE3Do1u6kf3/EWQfYNVKXX2pM+lcbHfJZpGs1sI5E
OIk7oje7o0ymTyRZ1f1KRmqhde+a0FHy5QT9EFYH8J7pBnui6HeSZpVKHrFYijw=
=s1ku
-----END PGP SIGNATURE-----


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Default quota value for fixed_ips is -1, what does it mean?

“nova quota-show” returns fixed_ips = -1. What does it mean?

root@Controller:~# nova quota-show
+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | 10    |
| cores                       | 20    |
| ram                         | 51200 |
| fixed_ips                   | -1    | <<<?????
| floating_ips                | 10    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+-----------------------------+-------+


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

-1 means unlimited.
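
A minimal sketch of inspecting and changing it (the tenant ID is a
placeholder; -1 keeps the quota unlimited):

    nova quota-show --tenant TENANT_ID
    nova quota-update --fixed-ips 100 TENANT_ID   # cap fixed IPs at 100
    nova quota-update --fixed-ips -1 TENANT_ID    # back to unlimited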

On 09/04/2014 03:25 PM, Danny Choi (dannchoi) wrote:
“nova quota-show” returns fixed_ips = -1. What does it mean?



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Swift And Friendly Name URLs

Suppose I have a tenant with a name of 'Company1' and with a tenant ID of
1234567.

Are there any readily available middleware extensions which can take
'Company1' as the tenant identifier in the URL and map it accordingly?

My containers would all be globally readable, and I would only need this
mapping to take place on the HTTP GET from the world.

So my end users would use this URL.

http://www.domain.com/company1/container/object-path/

The middleware I suppose would then have to map the above to the below:

http://www.domain.com/V1/AUTH_1234567/container/object-path/

Please advise

Thanks!
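
One approach is a small custom WSGI filter in the proxy pipeline; a minimal
sketch, assuming a hard-coded name map (the class name, the map source, and
the account prefix are illustrative):

    class FriendlyNameMiddleware(object):
        def __init__(self, app, name_map):
            self.app = app
            self.name_map = name_map  # e.g. {'company1': 'AUTH_1234567'}

        def __call__(self, environ, start_response):
            # only remap world-facing GETs, per the use case above
            if environ.get('REQUEST_METHOD') == 'GET':
                parts = environ.get('PATH_INFO', '').lstrip('/').split('/', 1)
                account = self.name_map.get(parts[0].lower()) if parts else None
                if account:
                    rest = parts[1] if len(parts) > 1 else ''
                    environ['PATH_INFO'] = '/v1/%s/%s' % (account, rest)
            return self.app(environ, start_response)

    def filter_factory(global_conf, **local_conf):
        name_map = {'company1': 'AUTH_1234567'}  # illustrative only
        def factory(app):
            return FriendlyNameMiddleware(app, name_map)
        return factory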


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Re: [Openstack] [H][Neutron][IPSecVPN]Cannot tunnel two namespace Routers

Anyone ?

2014-09-03 16:01 GMT+08:00 Germy Lure germy.lure@gmail.com:

Hi,

Thanks for your response.
I just disabled the SEG function (deployed on the compute nodes).
The ICMP packets hadn't even left the network node; I cannot tcpdump the
packets on the qr-xx interface.

Can you describe your demo? Havana or Icehouse? Network node kernel
version? etc.

2014-09-03 12:38 GMT+08:00 Akihiro Motoki amotoki@gmail.com:

Hi,

I did the same in the past for demo, and it worked well.
Does secgroup of VM2 allow connections from VM1?

On Wednesday, 3 September 2014, Germy Lure germy.lure@gmail.com wrote:

Hi Stackers,

Network TOPO like this: VM1(net1)--Router1-------IPSec VPN
tunnel-------Router2--VM2(net2)
If left and right side deploy on different OpenStack environments, it
works well. But in the same environment, Router1 and Router2 are namespace
implement in the same network node. I cannot ping from VM1 to VM2.

In R2 (Router2), the tcpdump tool tells us that R2 receives ICMP echo request
packets but doesn't send them out.

7837C113-D21D-B211-9630-000000821800:~ # ip netns exec
qrouter-4fd2e76e-37d0-4d05-b5a1-dd987c0231ef tcpdump -i any
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes

11:50:14.853470 IP 10.10.5.2 > 10.10.5.3: ESP(spi=0xc6d65c02,seq=0x1e6), length 132
11:50:14.853470 IP 128.6.25.2 > 128.6.26.2: ICMP echo request, id 44567, seq 486, length 64
11:50:15.853475 IP 10.10.5.2 > 10.10.5.3: ESP(spi=0xc6d65c02,seq=0x1e7), length 132
11:50:15.853475 IP 128.6.25.2 > 128.6.26.2: ICMP echo request, id 44567, seq 487, length 64
11:50:16.853461 IP 10.10.5.2 > 10.10.5.3: ESP(spi=0xc6d65c02,seq=0x1e8), length 132
11:50:16.853461 IP 128.6.25.2 > 128.6.26.2: ICMP echo request, id 44567, seq 488, length 64
11:50:17.853447 IP 10.10.5.2 > 10.10.5.3: ESP(spi=0xc6d65c02,seq=0x1e9), length 132
11:50:17.853447 IP 128.6.25.2 > 128.6.26.2: ICMP echo request, id 44567, seq 489, length 64
^C
8 packets captured
8 packets received by filter
0 packets dropped by kernel

ip addr in R2:

7837C113-D21D-B211-9630-000000821800:~ # ip netns exec
qrouter-4fd2e76e-37d0-4d05-b5a1-dd987c0231ef ip addr
187: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
206: qr-4bacb61c-72: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether fa:16:3e:23:10:97 brd ff:ff:ff:ff:ff:ff
    inet 128.6.26.1/24 brd 128.6.26.255 scope global qr-4bacb61c-72
    inet6 fe80::f816:3eff:fe23:1097/64 scope link
       valid_lft forever preferred_lft forever
208: qg-4abd4bb0-21: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether fa:16:3e:e6:cd:1a brd ff:ff:ff:ff:ff:ff
    inet 10.10.5.3/24 brd 10.10.5.255 scope global qg-4abd4bb0-21
    inet6 fe80::f816:3eff:fee6:cd1a/64 scope link
       valid_lft forever preferred_lft forever

In addition, the kernel counters in "/proc/net/snmp" in the namespace are
unchanged. Do these counters not work correctly with namespaces?

BR,
Germy


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Resize flavor not working

Hi All,

I tried to resize the tiny flavor to small using Cirros image in Openstack Icehouse.

I can go ahead and resize, and it reports 100% complete, but when I try the
resize-confirm command it gives the error below:

nova resize-confirm xxxxxxxxxxxx
ERROR: Cannot 'confirmResize' while instance is in vm_state active (HTTP 409)

Let me know whether we have an option to resize a flavor in Icehouse.

Regards,
Raghavendra Lad




Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] service startup order problems, Ubuntu 14.04

I am using icehouse on Ubuntu 14.04 from the canonical repository.

I'm using nova/neutron(ovs+vxlan). My system has 6 blades in a single enclosure.

If I get a power blip or something like that, the blades start at
different speeds (different generations, BIOS settings, amount of RAM,
that sort of thing).

It never returns to normal without logging in to each blade and
manually restarting services in the right order.

Has anyone else this issue? If so, how do you deal with it?

For example, the upstart config for neutron-plugin-openvswitch-agent
doesn't have a 'wait' or 'prereq' for openvswitch-switch to be up and
running.

The worst part about this is that it usually fails silently. I end up
with all my services running, but not working. E.g. I will end up with
2 interfaces instead of 4 on the host (qvo/qve/qbr/tap... usually
missing the qve). So then the DHCP in my instance will fail.

On the controller, dnsmasq might come up, but the underlying
interface is not there to bind to, and it silently does nothing
but keeps running.

So, I can't be the only one to find this very brittle. Is there some
clever solution? Something I am missing? Some 'make-it-all-good-now'
script I can run?


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi:

Aren't you using any CM server (such as Puppet)?
That way you can establish ordering policies for your services' upstart jobs.
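
Short of full configuration management, upstart override files can encode
the dependency directly; a minimal sketch (the job names are assumed from
the Canonical packages, and an .override file replaces only the stanzas it
contains):

    cat > /etc/init/neutron-plugin-openvswitch-agent.override <<'EOF'
    start on started openvswitch-switch
    stop on stopping openvswitch-switch
    EOF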

Regards,


JuanFra Rodriguez Cardoso

2014-09-05 17:05 GMT+02:00 Don Waterloo don.waterloo@gmail.com:

I am using icehouse on Ubuntu 14.04 from the canonical repository.

I'm using nova/neutron(ovs+vxlan). My system has 6 blades in a single
enclosure.



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [Cinder] deploying multiple backend

Hi,

I have three nodes, say N1, N2 and N3, where I want to deploy LVM and
consume it for volume provisioning using a common backend name, say
Group-LVM. Can you help me with the following question:

Do I need to have exactly the same cinder.conf for all three instances of
cinder-volume? I think not, as that would result in running three
cinder-volume backend instances on each node; in total, 9 instances of
cinder-volume publishing capabilities to cinder-scheduler.

Regards,
Jyoti Ranjan


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi Jyoti,

No. Each cinder.conf can be different. It can be a completely separate
configuration (LVM, FC, iSCSI). The scheduler and the volume service don't
need to share the same config file either.
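
For a shared backend name across nodes, the usual approach is to give each
node its own backend section but the same volume_backend_name, then tie a
volume type to that name. A minimal sketch of the relevant cinder.conf
fragment and commands (section and type names are illustrative):

    [DEFAULT]
    enabled_backends = lvm-1

    [lvm-1]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_group = cinder-volumes
    volume_backend_name = Group-LVM

    $ cinder type-create lvm
    $ cinder type-key lvm set volume_backend_name=Group-LVM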

On Fri, Sep 5, 2014 at 2:54 PM, Jyoti Ranjan jranjan@gmail.com wrote:



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] is anyone using zeromq for RPC?

The zmq driver in oslo.messaging, used for internal communication between OpenStack services, has been without a maintainer for a significant period of time. It isn’t actively tested, and it isn’t clear whether or not it works. The Oslo team would like to drop support for it in Kilo, but before we do that we would like to find out if (a) anyone uses it and (b) if any of those people would like to contribute to maintaining it.

Thanks,
Doug


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On Fri, Sep 5, 2014 at 2:50 PM, Doug Hellmann doug@doughellmann.com wrote:

The zmq driver in oslo.messaging, used for internal communication between
OpenStack services, has been without a maintainer for a significant period
of time. It isn’t actively tested, and it isn’t clear whether or not it
works. The Oslo team would like to drop support for it in Kilo, but before
we do that we would like to find out if (a) anyone uses it and (b) if any
of those people would like to contribute to maintaining it.

I haven't seen any work on zmq in DevStack for some time either. We'll
follow Oslo in dropping it if that decision is made.

dt

--

Dean Troyer
dtroyer@gmail.com


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [OSSN 0026] Unrestricted write permission to config files can allow code execution

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Unrestricted write permission to config files can allow code execution


Summary

In numerous places throughout OpenStack projects, variables are read
directly from configuration files and used to construct statements
which are executed with the privileges of the OpenStack service. Since
configuration files are trusted, the input is not checked or sanitized.
If a malicious user is able to write to these files, they may be able
to execute arbitrary code as the OpenStack service.

Affected Services / Software

Nova / All versions, Trove / Juno, possibly others

Discussion

Some OpenStack services rely on operating system commands to perform
certain actions. In some cases these commands are created by appending
input from configuration files to a specified command, and passing the
complete command directly to the operating system shell to execute.
For example:

--- begin example example.py snippet ---
command='ls -al ' + config.DIRECTORY
subprocess.Popen(command, shell=True)
--- end example example.py snippet ---

In this case, if config.DIRECTORY is set to something benign like
'/opt' the code behaves as expected. If, on the other hand, an
attacker is able to set config.DIRECTORY to something malicious such as
'/opt ; rm -rf /etc', the shell will execute both 'ls -al /opt' and
'rm -rf /etc'. When called with shell=True, the shell will blindly execute
anything passed to it. Code with the potential for shell injection
vulnerabilities has been identified in the above mentioned services and
versions, but vulnerabilities are possible in other services as well.
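
For contrast, passing the command as an argument list without shell=True
leaves shell metacharacters uninterpreted; a minimal sketch of the safer
form:

--- begin example example.py snippet ---
command = ['ls', '-al', config.DIRECTORY]
subprocess.Popen(command)
--- end example example.py snippet ---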

Please see the links at the bottom for a couple of examples in Nova and
Trove.

Recommended Actions

Ensure permissions for configuration files across all OpenStack
services are set so that only the owner user can read/write to them.
In cases where other processes or users may have write access to
configuration files, ensure that all settings are sanitized and
validated.
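
A minimal sketch of tightening a typical config file (paths and ownership
are illustrative; adapt to your packaging):

    chown nova:nova /etc/nova/nova.conf
    chmod 600 /etc/nova/nova.conf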

Additionally, the principle of least privilege should always be observed:
files should be protected with the most restrictive permissions
possible. Other serious security issues, such as the exposure of
plaintext credentials, can result from permissions which allow
malicious users to view sensitive data (read access).

Contacts / References

This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0026
Original LaunchPad Bug : https://bugs.launchpad.net/ossn/+bug/1343657
OpenStack Security ML : openstack-security@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
Shell Injection:

https://docs.python.org/2/library/subprocess.html#frequently-used-arguments
Additional LaunchPad Bugs:
https://bugs.launchpad.net/trove/+bug/1349939
https://bugs.launchpad.net/nova/+bug/1192971
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJUCho2AAoJEJa+6E7Ri+EV08kH/3bD6R+o63JRin04rVjYcxZD
cerwxS5BPhQ8TgFcWXnzqSrMyru0VlutzmZ3xEn7Zc4x5IdWeWPPDIrgAlnmxAYv
//JS6wSazRDEu5fJvMe6vLKaJ0q5oN7ANqZGpYIKSDQh/M4jaQ85YK+jGH4g5ywk
QJl7GfBX1IQ6V9mOFu/Jm52CmQKWwNnhpSvlhhWZjS3P6CErMMSbIsg6Ec94Kvb3
5Qb2GRMbBYmscxtHU55qRgd2YILF9Jt0SwENE36Y/qdJDYgSU73kIaAuzwUfwUhq
TKc9cnT9gUZiA+UfYfAWgOxC+cyl5HSZe9FqFSnydgFXbXj/RNJ9rb+4yLrnCRM=
=je33
-----END PGP SIGNATURE-----


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] parallel (concurrent) volume attachment to server

Hi All,

I have openstack installed on RHEL6.5 using RDO Quickstart. Everything is now working except for a weird volume attachment issue.

I have a program which launches an instance and then starts two threads: one to create a volume (10GB) from a snapshot and attach it to the server, the other to create an empty volume (1GB) and attach it to the server.
The issue is that the response from the attach-volume call reports device names for the volumes that differ from those seen in the server. On the server I am using "fdisk -l" to find the size of each volume. There have been situations when one volume doesn't appear in the server, and when both volumes don't appear, in spite of the API call reporting a successful attachment.

Is it OK to try and attach volumes to server in two separate threads concurrently?

Regards,
Adam A R



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On Sat, Sep 06, 2014 at 06:37:31AM +0000, Ambadas Ramanna Adam wrote:
The issue is that response from attach volume call reports device names
for volumes different from those seen in the server. On server I am using
"fdisk -l" to find the size of volume. There have been situations when one
volume doesn't appear in server and when both volumes don't appear in
server, in spite of API call reporting successful attachment.

If you mean the device name in /dev is not what was specified when creating
the volume attachment, then I think that's a known issue - ultimately
cinder just doesn't have much control over how the host OS on the VM
enumerates new devices.

You're probably best using /dev/disk/by-id/virtio- if that is
your issue, and ignoring the mountpoint option to the volume attachment
request.
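
A minimal sketch of resolving volumes by ID instead (the UUID handling is an
assumption: on KVM guests the virtio serial usually carries the first 20
characters of the Cinder volume UUID):

    ls -l /dev/disk/by-id/
    readlink -f /dev/disk/by-id/virtio-$(echo "$VOLUME_ID" | cut -c1-20)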

Steve

[Openstack] Very slow volume deletion even after conf update

Hi All,

Deleting a volume is very slow. Even a 1GB volume takes a long time (3-5 minutes).

I updated nova.conf and cinder.conf to set volume_clear to none and volume_clear_size to 10.
I ensured that the services were restarted. Even after a machine reboot, volume deletion is not fast.

Is there anything that I missed?
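
For reference, the option lives in the [DEFAULT] section of cinder.conf on
the node running cinder-volume, and that service must be restarted (the
service name below assumes RDO packaging):

    # /etc/cinder/cinder.conf
    [DEFAULT]
    volume_clear = none

    $ service openstack-cinder-volume restart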

Regards,
Adam A R

**************** CAUTION - Disclaimer *****************
This e-mail contains PRIVILEGED AND CONFIDENTIAL INFORMATION intended solely
for the use of the addressee(s). If you are not the intended recipient, please
notify the sender by e-mail and delete the original message. Further, you are not
to copy, disclose, or distribute this e-mail or its contents to any other person and
any such actions are unlawful. This e-mail may contain viruses. Infosys has taken
every reasonable precaution to minimize this risk, but is not liable for any damage
you may sustain as a result of any virus in this e-mail. You should carry out your
own virus checks before opening the e-mail or attachment. Infosys reserves the
right to monitor and review the content of all messages sent to or from this e-mail
address. Messages sent to or from this e-mail address may be stored on the
Infosys e-mail system.
INFOSYS******** End of Disclaimer ********INFOSYS


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] problem in launching instance in icehouse [Virtual Interface creation failed]

Hi,

I am trying to set up OpenStack with a 3-node setup.
I am following the
OPENSTACK INSTALLATION GUIDE FOR UBUNTU 12.04/14.04 (LTS) - ICEHOUSE
http://docs.openstack.org/icehouse/install-guide/install/apt/content/

The following are the logs in the dashboard.
Please help me; please give me some inputs on how to debug.

Instance Overview
Info


Name
demo-instance100
ID
3d9b2d74-18e3-4654-a11f-19337754ccad
Status
Error
Availability Zone
nova
Created
Sept. 6, 2014, 12:50 a.m.
Uptime
41 minutes
Fault


Message
Virtual Interface creation failed
Code
500
Details
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 290,
in decorated_function return function(self, context, *args, **kwargs) File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2102, in
run_instance do_run_instance() File
"/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line
249, in inner return f(*args, **kwargs) File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2101, in
do_run_instance legacy_bdm_in_spec) File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1225, in
_run_instance notify("error", fault=e) # notify that build failed File
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line
68, in __exit__ six.reraise(self.type_, self.value, self.tb) File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1209, in
_run_instance instance, image_meta, legacy_bdm_in_spec) File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1353, in
_build_instance network_info.wait(do_raise=False) File
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line
68, in __exit__ six.reraise(self.type_, self.value, self.tb) File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1329, in
_build_instance set_access_ip=set_access_ip) File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 393, in
decorated_function return function(self, context, *args, **kwargs) File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1741, in
_spawn LOG.exception(_('Instance failed to spawn'), instance=instance) File
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line
68, in __exit__ six.reraise(self.type_, self.value, self.tb) File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1738, in
_spawn block_device_info) File
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2287,
in spawn block_device_info) File
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3703,
in _create_domain_and_network raise
exception.VirtualInterfaceCreateException()
Created
Sept. 6, 2014, 12:50 a.m.
Specs


Flavor
m1.tiny
RAM
512MB
VCPUs
1 VCPU
Disk
1GB
IP Addresses


Ext-Net
203.0.113.104
Demo-Net1
172.168.1.2
Ext-Net1
192.0.113.102
Security Groups


default
· ALLOW IPv4 from default
· ALLOW IPv4 22/tcp from 0.0.0.0/0
· ALLOW IPv4 to 0.0.0.0/0
· ALLOW IPv6 to ::/0
· ALLOW IPv6 from default
· ALLOW IPv4 icmp from 0.0.0.0/0
Meta


Key Name
ssn-key
Image Name
IMAGELABEL
http://10.203.3.15/horizon/project/images/79072353-a583-4a1a-9f3c-d5f3e1fa4328/
Volumes Attached


Volume
No volumes attached.

thanks,
srinivas.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi Srinivasreddy,
Can you please paste your nova.conf and ml2_conf.ini?
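
While gathering those, a few quick checks usually narrow this failure down;
a minimal sketch (paths assume the Ubuntu install-guide layout, and the
vif_plugging options exist from Icehouse onward):

    neutron agent-list                      # every agent should show alive (:-))
    grep vif_plugging /etc/nova/nova.conf   # nova timing out waiting for neutron?
    tail /var/log/neutron/openvswitch-agent.log /var/log/nova/nova-compute.log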

Regards,
Nitish B.

On Sun, Sep 7, 2014 at 12:12 AM, Srinivasreddy R <
srinivasreddy4390@gmail.com> wrote:

Hi,

I am trying to setup openstack with 3 node setup .
i am following
OPENSTACK INSTALLATION GUIDE FOR UBUNTU 12.04/14.04 (LTS) - ICEHOUSE

http://docs.openstack.org/icehouse/install-guide/install/apt/content/

the following are the logs in dashboard ..
pls help me ..pls give me some inputs how to debug .


thanks,
srinivas.


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Openstack VM not getting ip

An OpenStack Icehouse VM is not getting an IP inside the guest.
It shows up on the dashboard, but when you boot it and type ifconfig, no IP
comes up.

Any troubleshooting steps or config files to check?

Regards,
Rag hi


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Are you using nova-network or Neutron? What is the ifconfig output?

Best Regards!

Chao Yan
--------------
My twitter: Andy Yan @yanchao727 https://twitter.com/yanchao727
My Weibo: http://weibo.com/herewearenow
--------------

2014-09-07 16:51 GMT+08:00 Raghavendra Lad lad.raghavendra@gmail.com:



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] OpenStack On Debian Jessie

Hello There,
Is there any document about the installation of OpenStack on Debian Jessie?
All the docs apply to Debian Wheezy.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi Hossein,
Debian Jessie is still in the testing stage. For production services, Wheezy
with Havana is stable enough to use. If you want to install Icehouse on
Debian you can switch to the unstable branch and add the OpenStack backports
repository; visit the link below for more information:
http://docs.openstack.org/icehouse/install-guide/install/apt-debian/content/basics-packages.html
But if you are looking for a Debian-based platform to run OpenStack on, I
suggest taking a look at Ubuntu 14.04 LTS, which ships Icehouse in its main
repository.

Cheers
Roozbeh

On Monday 08 September 2014 00:52:09 Hossein Zabolzadeh wrote:
Hello There,
Is there any document about the installation of OpenStack on Debian Jessie?
All the docs apply to Debian Wheezy.
--
Roozbeh Shafiee
Cloud and Virtualization Architect
http://RoozbehShafiee.Com


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [Heat] trusts_delegated_roles=member?

Hi,

I'm looking at configuring our Heat deployment to use trusts as the
deferred auth method. The requirement to grant each user the
heat_stack_owner role (or similar) makes things a bit awkward, since
we allow users to grant each other membership within a project and
don't want them to have to worry about specific roles for different
services.

I'm considering just setting:

trusts_delegated_roles=member

But I'm wondering if there are any security implications in doing this
that I haven't considered? Obviously we'd lose the ability to restrict
exactly what Heat can do with this trust, but it seems like this is
still a better alternative than not using trusts at all?

Cheers,
Kieran


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On Mon, Sep 08, 2014 at 11:11:07AM +1000, Kieran Spear wrote:
Hi,

I'm looking at configuring our Heat deployment to use trusts as the
deferred auth method. The requirement to grant each user the
heat_stack_owner role (or similar) makes things a bit awkward, since
we allow users to grant each other membership within a project and
don't want them to have to worry about specific roles for different
services.

I'm considering just setting:

trusts_delegated_roles=member

But I'm wondering if there are any security implications in doing this
that I haven't considered? Obviously we'd lose the ability to restrict
exactly what Heat can do with this trust, but it seems like this is
still a better alternative than not using trusts at all?

As it happens, I posted a patch last week which will make the heat default
exactly what you describe:

https://review.openstack.org/#/c/119415/

The only downsides I'm aware of:

  • Some residual confusion between member and Member roles, also I'm not
    sure if this is necessarily going to be the same in all environments, but
    I'm assuming aligning with the keystone.conf default role name is sane.

  • My testing indicates the user only gets the member role if created via
    the v2 Keystone API : https://bugs.launchpad.net/keystone/+bug/1366133

So essentially I think you should be fine configuring heat like this if you
wish.

Steve
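
For reference, a minimal sketch of the heat.conf settings under discussion
(both options exist in Icehouse/Juno-era Heat; the role value mirrors the
keystone.conf default member role):

    [DEFAULT]
    deferred_auth_method = trusts
    trusts_delegated_roles = member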

[Openstack] Unable to delete demo-net, ext-net

Hi,

I have configured two external networks with different configurations. I
am unable to delete the old one.

Please let me know how to delete external networks.

I am attaching the screen shots ..

Thanks,

Srinivas.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi,
You must delete the respective network ports from the corresponding router and any running instances (floating IPs, fixed IPs, etc.) in order to delete the network.
If there are active ports on the network, neutron doesn't allow network deletion.
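A minimal sketch of clearing an external network's ports before deleting it
(router, subnet, and network names are placeholders):

    neutron floatingip-list                  # release floating IPs first
    neutron router-gateway-clear demo-router
    neutron router-interface-delete demo-router demo-subnet
    neutron net-delete ext-net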
Hope it Helps.

Date: Mon, 8 Sep 2014 17:08:15 +0530
From: srinivasreddy4390@gmail.com
To: openstack@lists.openstack.org
Subject: [Openstack] Unabel to delete demo-net, ext-net

Hi,
I have configured two external networks with different configurations. I am unable to delete the old one.
Please let me know how to delete external networks.
I am attaching the screen shots ..

Thanks,
Srinivas.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] virtual machine not able to get ip address

hi,

I have configured a 3-node setup with the Icehouse release and am able to
launch an instance. The instance is in the active state with no errors, but
it is not reachable.

It is not able to get an IP address. How can I debug further?

below is the Instance console logs ..

Sep 8 09:49:08 cirros kern.notice kernel: [ 0.000000] Linux
version 3.2.0-60-virtual (buildd@toyol) (gcc version 4.6.3
(Ubuntu/Linaro 4.6.3-1ubuntu5) ) #91-Ubuntu SMP Wed Feb 19 04:13:28
UTC 2014 (Ubuntu 3.2.0-60.91-virtual 3.2.55)
Sep 8 09:49:08 cirros kern.info kernel: [ 0.000000] Command line:
LABEL=cirros-rootfs ro console=tty1 console=ttyS0
Sep 8 09:49:08 cirros kern.info kernel: [ 0.000000] KERNEL supported cpus:
Sep 8 09:49:08 cirros kern.info kernel: [ 0.000000] Intel GenuineIntel
Sep 8 09:49:08 cirros kern.info kernel: [ 0.000000] AMD AuthenticAMD
Sep 8 09:49:08 cirros kern.info kernel: [ 0.000000] Centaur CentaurHauls
Sep 8 09:49:08 cirros kern.info kernel: [ 0.000000] BIOS-provided
physical RAM map:
Sep 8 09:49:08 cirros kern.info kernel: [ 0.000000] BIOS-e820:
0000000000000000 - 000000000009fc00 (usable)
Sep 8 09:49:08 cirros kern.info kernel: [ 1.176020] acpiphp: ACPI
Hot Plug PCI Controller Driver version: 0.5
Sep 8 09:49:08 cirros kern.info kernel: [ 1.176074] acpiphp: Slot
[3] registered
Sep 8 09:49:08 cirros kern.info kernel: [ 1.176084] acpiphp: Slot
[4] registered
Sep 8 09:49:08 cirros kern.info kernel: [ 1.176093] acpiphp: Slot
[5] registered
Sep 8 09:49:08 cirros kern.info kernel: [ 1.176101] acpiphp: Slot
[6] registered
Sep 8 09:49:08 cirros kern.info kernel: [ 1.176110] acpiphp: Slot
[7] registered
Sep 8 09:49:08 cirros kern.info kernel: [ 1.176118] acpiphp: Slot
[8] registered
Sep 8 09:49:08 cirros kern.info kernel: [ 1.176127] acpiphp: Slot
[9] registered
Sep 8 09:49:08 cirros kern.info kernel: [ 1.176136] acpiphp: Slot
[10] registered
Sep 8 09:49:08 cirros kern.info kernel: [ 1.176145] acpiphp: Slot
[11] registered
Sep 8 09:49:08 cirros kern.info kernel: [ 1.176153] acpiphp: Slot
[12] registered
Sep 8 09:49:08 cirros kern.info kernel: [ 1.181355]
ne2k-pci.c:v1.03 9/22/2003 D. Becker/P. Gortmaker
Sep 8 09:49:08 cirros kern.info kernel: [ 1.182476] 8139cp:
8139cp: 10/100 PCI Ethernet driver v1.3 (Mar 22, 2004)
Sep 8 09:49:08 cirros kern.info kernel: [ 1.183854] pcnet32:
pcnet32.c:v1.35 21.Apr.2008 tsbogend@alpha.franken.de
Sep 8 09:49:08 cirros kern.info kernel: [ 1.187792] ip_tables: (C)
2000-2006 Netfilter Core Team
Sep 8 09:49:19 cirros kern.debug kernel: [ 12.192125] eth0: no IPv6
routers present
Sep 8 09:52:48 cirros authpriv.info dropbear[295]: Running in background

####### debug end

[CirrOS ASCII-art boot banner]
http://cirros-cloud.net

login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
cirros login:

thanks,
srinivas.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi,
This happened to me once. First, check the neutron logs for any errors in /var/log/neutron/. If you see errors, post them here.
Second, verify that neutron DHCP is running. Verify that you have configured the OpenStack data network correctly, verify the VLAN/GRE (whichever applicable) configuration on the physical switch, and then restart the neutron services on the control and compute nodes. Restart the VM and it should get the IP. This worked for me.
For further debugging you can use wireshark/tcpdump on the network interfaces/bridges on the control and compute nodes during VM creation to see where the DHCP packet is getting dropped.
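
A minimal sketch of checking the DHCP side on the network node (the network
ID and tap interface name are placeholders):

    ip netns list | grep qdhcp
    ip netns exec qdhcp-NET_ID ip addr
    ip netns exec qdhcp-NET_ID tcpdump -nli tapXXXXXXXX port 67 or port 68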
Regards,
Moiz
Date: Mon, 8 Sep 2014 17:15:28 +0530
From: srinivasreddy4390@gmail.com
To: openstack@lists.openstack.org
Subject: [Openstack] virtual machine not able to get ip address

hi,
I have configured 3 node setup with icehouse release. able to launch to instance . the instance is in active state with no errors ..but it is not reachable .
It is not able to get ip address .. How can i debug forther ..

thanks,
srinivas.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] database and dashboard out of sync

hi,
I have generated a key pair and it is visible with the command
nova keypair-list

but it is not visible in the dashboard. Because of this I am getting an
error while launching an instance, with a message like:
ERROR (Not Found): The resource could not be found. (HTTP 404) (Request-Id ....)
Please help me.

thanks,
srinivas.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On 08/09/14 23:15, Srinivasreddy R wrote:

Hi Srinivas,

The dashboard doesn't keep its own database; it only talks to the other
services using their APIs, so it shouldn't be able to get out of sync.

Perhaps double-check that you're using the same tenant on the command line
(from your rc file, probably the OS_TENANT_NAME environment variable) and on
the dashboard (the name should be indicated in the top bar)?

Where do you see the error 404, and what's the traceback? You shouldn't be
able to select a key pair you don't have access to from the dashboard at
all, so rather than showing a 404 it should simply launch the instance with
no key pair...
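
A minimal sketch of comparing the CLI tenant with the dashboard project
(the rc filename is an assumption):

    source admin-openrc.sh
    env | grep -E 'OS_(TENANT_NAME|USERNAME)'
    nova keypair-list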

Julie

[Openstack] neutron and nova database are not in sync

hi,

I have added two networks. When trying to launch an image it gives an error
that the neutron and nova databases are not in sync.
How can I proceed further? Below are the outputs of neutron net-list and
nova net-list.

user@user-ThinkCentre-M73:~$ neutron net-list
+--------------------------------------+----------+-----------------------------------------------------+
| id                                   | name     | subnets                                             |
+--------------------------------------+----------+-----------------------------------------------------+
| 8e02bce9-42ac-4823-820c-9d13b847fbaa | demo-net | 1e61f57a-e33a-4664-809b-b522429dace3 192.168.1.0/24 |
| c942e267-7a76-4f2b-97ef-721687de6158 | ext-net  | f0bd4740-8fd4-4a32-bc80-7dff4fa2016a 203.0.113.0/24 |
+--------------------------------------+----------+-----------------------------------------------------+

user@user-ThinkCentre-M73:~$ nova net-list
+----+-------+------+
| ID | Label | CIDR |
+----+-------+------+
+----+-------+------+
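
An empty "nova net-list" next to a populated "neutron net-list" usually
means nova is not pointed at neutron at all; a minimal sketch of the
relevant nova.conf [DEFAULT] lines for Icehouse (host and credentials are
placeholders):

    network_api_class = nova.network.neutronv2.api.API
    neutron_url = http://controller:9696
    neutron_auth_strategy = keystone
    neutron_admin_tenant_name = service
    neutron_admin_username = neutron
    neutron_admin_password = NEUTRON_PASS
    neutron_admin_auth_url = http://controller:35357/v2.0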

thanks,
srinivas.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [heat] identity:create_domain failed (403)

Hi guys,

            I have 2 environments that are almost identical but one of them gives me this:

keystoneclient.openstack.common.apiclient.exceptions.Forbidden: You are not authorized to perform the requested action, identity:create_domain. (HTTP 403)

When I try to run:

heat-keystone-setup-domain --stack-domain-admin stack_admin --stack-domain-admin-password $password --stack-user-domain-name heat

The problem is that I'm using the same policy everywhere and one works but the other doesn't. I'm out of ideas!

Any hints?

Dave


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

I think Keystone itself hit an error; check the keystone service and
troubleshoot it. The last time I got this error it was because NTP was not
configured, and keystone misbehaved without time sync.
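
Quick checks along those lines (commands assume the Icehouse-era keystone
CLI and an NTP daemon on the host):

    ntpq -p                     # is the clock actually syncing?
    keystone --debug token-get  # do the admin credentials work at all?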

Best Regards!

Chao Yan
--------------
My twitter: Andy Yan @yanchao727 https://twitter.com/yanchao727
My Weibo: http://weibo.com/herewearenow
--------------

2014-09-09 7:07 GMT+08:00 David Hill david.hill@ubisoft.com:



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] OpenStackClient 0.4.1 released

OpenStackClient 0.4.1 has been released to PyPI. This release consists of
mostly bug fixes and a few new commands.

python-openstackclient can be installed from the following locations:

PyPI: https://pypi.python.org/pypi/python-openstackclient
OpenStack tarball:
http://tarballs.openstack.org/python-openstackclient/python-openstackclient-0.4.1.tar.gz

Release Highlights
* Bug 1319381: remove insecure keyring support
* Bug 1337245: add user password set command
* Bug 1337684: add extension list --compute
* add container create and container delete commands
* add initial support for global --timing option (similar to nova CLI)
* complete Python 3 compatibility
* add authentication via --os-trust-id for Identity v3
...and more...

dt

--

Dean Troyer
dtroyer@gmail.com


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Architecture Compute Beginner Question

Hello all,

I have a fundamental question which, hopefully, someone here can answer.
I will ask by presenting a probably non-realistic example.

Let's say I have 10 compute nodes as separate physical machines with 1 core
each. Without taking into account the ability to overload each core (the
famous 16:1) I would like to know:

Can I use all 10 cores to create a single instance that takes advantage
of the whole core pool?
Or does the fact that each compute node has
its own hypervisor limit the core usage of each instance to one? Are the
whole resources abstracted and seen as one, or not?

Thank you in advance.

B/R

Chris


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On 09/09/14 17:38, Christos Grivas wrote:
Hello all,

I have a fundamental question which, hopefully, someone here can answer.
I will ask by presenting a probably non-realistic example.

Let's say I have 10 compute nodes as separate physical machines with 1
core each. Without taking into account the ability to overload each core
(the famous 16:1) I would like to know:

Can I use all 10 cores to create a single instance that takes
advantage of the whole core pool?
Or does the fact that each compute node
has its own hypervisor limit the core usage of each instance to one?
Are the whole resources abstracted and seen as one, or not?

Thank you in advance.

Hi Chris,

Thanks for looking into OpenStack! Your question has been asked and
answered previously at:

https://ask.openstack.org/en/question/1230/are-cpu-memory-pooled/

https://ask.openstack.org/en/question/32877/multiple-compute-for-one-instance-is-it-possible/

https://ask.openstack.org/en/question/7925/can-vms-use-paralleled-cpus/

https://ask.openstack.org/en/question/6932/cores-from-multiple-physical-hosts-in-vm/

However, there has been some work going on recently around NUMA - that
might be worth looking into if you have such a requirement.

Regards,

Tom

[Openstack] [nova] [policy] hypervisor list

Hi guys,

Do you know if it is possible to give normal users the right to list
hypervisors? Apparently setting the hypervisor policies in
/etc/nova/policy.json is not enough; the right to do so is buried within the
database or API code.

Any workaround? Even if it means that we need to comment out some code in
the API or database code?

Kind regards,

--
--
Abbass MAROUNI
VirtualScale


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

I cannot think of any reason why a normal user should see the output of
the os-hypervisors extension. Could you please explain the use case
around this?

Best,
-jay
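
Editor's note: for reference, the policy knob under discussion is the one
below, from an Icehouse-era /etc/nova/policy.json (a sketch; your file may
differ). As Abbass observes, relaxing it alone may not be sufficient if the
API code also insists on an admin context:

    "compute_extension:hypervisors": "rule:admin_api",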

[Openstack] Implement "swift service-list" in python-swiftclient

Hi,

In the Horizon dashboard, under Admin -> System Info, we have service lists
for Compute and Block Storage. I have filed a blueprint to populate the Swift
services there.
But while going through the implementation details of Compute Services and
Block Storage Services, I got to know that the details there are populated
through API calls to python-novaclient and python-cinderclient respectively,
which in turn use "nova service-list" and "cinder service-list" to return
the details.

Whereas no such method is implemented in python-swiftclient to get the list
of services.

So my question is,

1) Do we have plans to include "swift service-list" in swiftclient?
If yes, then I will file a blueprint in python-swiftclient to
implement the same, because I require it to populate Admin -> System
Info -> Object Storage Services.

2) Is there any other way through which I can get the details of the swift
services (s-proxy, s-account, s-container, s-object)?

As a side note, I can see that it has also not been implemented in some
other services like glance and heat. Is that a design decision, or has the
feature simply not been implemented?

--


Thanks and Regards

Ashish Chandra


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On Tue, 9 Sep 2014 15:36:03 +0530
Ashish Chandra mail.ashishchandra@gmail.com wrote:

1) Do we have plans to include "swift service-list" in swiftclient?
If yes, then I will file a blueprint in python-swiftclient to
implement the same, because I require it to populate Admin -> System
Info -> Object Storage Services.

File a patch in Gerrit and let's have a look. Sounds like a reasonable
idea to me on the face of it. You should be able to get away with
formatting the contents of the JSON fetched from "/info", hopefully
without changes to the server-side code.

-- Pete
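
Editor's note: Pete's suggestion refers to Swift's standard /info endpoint,
which returns a JSON document of cluster capabilities. A minimal sketch of
fetching it (Python 2, matching the era of this thread; the proxy URL is an
assumption):

    import json
    import urllib2

    def swift_capabilities(proxy_url):
        # /info returns JSON describing enabled middleware and settings
        resp = urllib2.urlopen(proxy_url.rstrip('/') + '/info')
        return json.loads(resp.read())

    print(json.dumps(swift_capabilities('http://proxy:8080'), indent=2))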

[Openstack] Unsuccessful block live-migration with assigned floating ip using nova-network

Hello,

I am facing to following issue:

I am not able to successfully migrate (block live migration) a VM with an
assigned floating IP; there is no problem migrating a VM with no floating IP
assigned. We are using nova-network for networking. The migration is initiated
but it gets stuck in "migrating" status and never ends. The only option is to
terminate the instance during its migration, which leads to inconsistency
between the VMs listed by the "nova list" and "virsh list" commands. (The
terminated instance appears as a running machine under virsh list.)

nova-all.log at source host seems to be fine, but nova-all.log at
destination host records following error during the migration:

File "/usr/lib64/python2.6/site-packages/sqlalchemy/orm/unitofwork.py", line 63, in setnewvaluestate = attributes.instance_state(newvalue)

AttributeError: 'FixedIP' object has no attribute 'sainstance_state'

System configuration is as follows:

  • Mirantis Openstack 5.0
  • Openstack Icehouse release
  • Nova 2.18.1
  • QEMU 1.2.1
  • Libvirt 0.10.2
  • nova-network

Is there any way to set up block live migration of a VM with an assigned
floating IP using nova-network in the Icehouse release?

Thank you for your help

BR,

Vojtech


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Keystone problem

Hi Everyone,

This is my second OpenStack installation. My first installation was
really easy, but this time I'm having a hard time right at the start with
Keystone; I hope you can shed some light on this issue. I must be doing
something wrong, as I did the previous installation only a week ago.

Basically I'm stuck at "verify the identity service installation", where
I'm testing the admin user.

I get a 404 from running keystone user-list as the admin user. It runs
fine using OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT.

I get "Auth token not in the request header." when running keystone in
debug mode.

I can run "keystone token-get" and get the normal output (id, tenant_id,
user_id, ...). I still get "Auth token not in the request header." in the
log, though.

I have dropped the keystone database and I have repeated the keystone
installation 2 times with no luck.

rpm -qa | grep keystone

python-keystone-2014.1.1-1.el6.noarch
openstack-keystone-2014.1.1-1.el6.noarch
python-keystoneclient-0.9.0-1.el6.noarch

Here is a log of relevant information:

http://pastebin.com/GcgyPUbd

Thanks for your help.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi,

You have to unset the OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT environment variables.
Then use the keystone --os-username/--os-password options to authenticate as admin. Create the admin user and associate the user, role and service.
Let me know if you encounter any issues.

Regards,
Raghavendra Lad
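
Editor's note: concretely, the verification step Raghavendra describes looks
roughly like this (a sketch; the credentials and endpoint are assumptions):

    unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
    export OS_USERNAME=admin
    export OS_PASSWORD=ADMIN_PASS
    export OS_TENANT_NAME=admin
    export OS_AUTH_URL=http://controller:35357/v2.0
    keystone user-list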



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [neutron] Missing openflow rules in br-tun

Hello,

I'm using OpenStack with Neutron (Havana release) and the ML2 plugin with
l2population and the openvswitch agent on the hosts. I have tenant networks
with vxlan tunnels and generally everything works well, but I have hit a
small issue a few times: on a host, the OpenFlow rule for traffic incoming
from a different host was missing. After restarting the ovs agent it
configured all the OpenFlow rules fine and the network in the instances was OK.
Have you noticed the same problem? Maybe there is a patch or solution for
such a problem? Thanks in advance for any help.

--
Regards,
Sławek Kapłoński
slawek@kaplonski.pl


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] unable to retrieve back the networks table in nova database after truncation

Hi,
I am trying to set up a three-node deployment. Unfortunately we have
truncated the networks table in the nova database in MySQL,
and because of that we are facing an error when launching an instance.
Please help us to restore the networks table for the nova database.

We have tried uninstalling all the nova packages on the controller node and
reinstalling.
Even so, the networks table in the nova database is still empty, and
because of this we are not able to launch the instance.

user@user-ThinkCentre-M73:~$ source admin-openrc.sh
user@user-ThinkCentre-M73:~$ neutron net-create ext-net --shared
--router:external=True
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 5ab6836d-b04a-4d45-88ca-4ca7c3e15db5 |
| name                      | ext-net                              |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 1                                    |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 92ae5ba7cdcf41a7ac82d3c25dd209e6     |
+---------------------------+--------------------------------------+
user@user-ThinkCentre-M73:~$ neutron subnet-create ext-net --name
ext-subnet --allocation-pool start=203.0.113.101,end=203.0.113.200
--disable-dhcp --gateway 203.0.113.1 203.0.113.0/24
Created a new subnet:
+------------------+----------------------------------------------------+
| Field            | Value                                              |
+------------------+----------------------------------------------------+
| allocation_pools | {"start": "203.0.113.101", "end": "203.0.113.200"} |
| cidr             | 203.0.113.0/24                                     |
| dns_nameservers  |                                                    |
| enable_dhcp      | False                                              |
| gateway_ip       | 203.0.113.1                                        |
| host_routes      |                                                    |
| id               | 3c2d650e-51ae-4924-989d-d881e2ebfe1b               |
| ip_version       | 4                                                  |
| name             | ext-subnet                                         |
| network_id       | 5ab6836d-b04a-4d45-88ca-4ca7c3e15db5               |
| tenant_id        | 92ae5ba7cdcf41a7ac82d3c25dd209e6                   |
+------------------+----------------------------------------------------+
user@user-ThinkCentre-M73:~$ neutron net-list
+--------------------------------------+----------+-----------------------------------------------------+
| id                                   | name     | subnets                                             |
+--------------------------------------+----------+-----------------------------------------------------+
| 5ab6836d-b04a-4d45-88ca-4ca7c3e15db5 | ext-net  | 3c2d650e-51ae-4924-989d-d881e2ebfe1b 203.0.113.0/24 |
| a78571fd-e9fc-40ea-a488-e460b20e267a | demo-net | 0c66fbc3-edb3-4c46-b725-76443de8e570 192.168.1.0/24 |
+--------------------------------------+----------+-----------------------------------------------------+
user@user-ThinkCentre-M73:~$ nova net-list
+----+-------+------+
| ID | Label | CIDR |
+----+-------+------+
+----+-------+------+

user@user-ThinkCentre-M73:~$ source demo-openrc.sh
user@user-ThinkCentre-M73:~$ nova net-list
+----+-------+------+
| ID | Label | CIDR |
+----+-------+------+
+----+-------+------+


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] KVM shows several processes for a single VM on the host

Hallo all,

I have an OpenStack setup based on Havana with several compute nodes. When I instantiate a virtual machine with 1 or more virtual cores, I am able to see more than one process associated with the same VM in the process list of the compute node. The process list I refer to is what can be seen by running either top or htop.

Can I know the significance of the other processes which are associated with the same VM? Are these child processes?

Regards,
Krishnaprasad


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Are they separate processes or separate threads? Separate threads would
be expected, but separate processes would not be.

Chris

[Openstack] SWIFT - Write Quorum

If I configure Swift to use 4 replicas across two regions (two replicas per
region), is it possible to only list a newly ingested object once it has been
written at least twice? The goal is to list a new object only if it has a
presence in each region.

west coast
region 1 - zone 1
region 1 - zone 2

east coast
region 2 - zone 1( 3?)
region 2 - zone 2( 4?)

Thanks!


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

I'm not sure what the question is.

If you are looking to have a successful response after it's written twice in a cluster with 4 replicas, no. Swift's quorum calculation is (replicas DIV 2 + 1). This means that for 4 replicas, you have a quorum size of 3. What I would suggest you look in to is the write_affinity setting so that you can do a full-quorum (at least) write to the local region and then asynchronously replicate to the other region. See http://docs.openstack.org/developer/swift/admin_guide.html#geographically-distributed-clusters and https://swiftstack.com/blog/2012/09/16/globally-distributed-openstack-swift-cluster/.

If you are looking to ensure that there is at least one replica in each region, then yes. The quorum size of three (see above) will ensure that, without any write_affinity settings, you'll have at least one replica in each region and two in another before the client gets a 2xx success response code to the PUT request.

--John
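
Editor's note: a minimal sketch of the calculation John describes (not
Swift's actual source, just the arithmetic):

    def quorum_size(replicas):
        # Swift's majority quorum: more than half of the replicas
        return replicas // 2 + 1

    assert quorum_size(3) == 2
    assert quorum_size(4) == 3  # the case discussed above: 4 replicas -> 3 acks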



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] 3 processes for a VM which has 1 VCPU

Hallo all,

Let me make my earlier email more precise. For a VM that has 1 virtual core, I am able to see 3 processes in total (I expected probably 1 process per virtual core). Can I kindly know the purpose of the other two processes?

Regards,
Krishnaprasad
From: Narayanan, Krishnaprasad
Sent: Dienstag, 9. September 2014 15:58
To: 'openstack@lists.openstack.org'
Subject: KVM shows several processes for a single VM on the host



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [nova] Live migration problem

Good day, everyone.

I use OpenStack Icehouse 2014.1.2, QEMU (for tests) and cinder with LIO as the iSCSI target system on Alt Linux (a Russian Linux distribution), with OpenStack packages from Fedora.

I have a problem with live migration.

Steps to reproduce:
1. Create VM with boot from disk in one node (c3).
2. Migrate this VM to another node (c2). Live migration is OK.
3. Again migrate this VM to c3 host and receive error "No portal found".

During the second migration, Nova does not log out of the iSCSI session on the source host c2, and it tries to connect the iSCSI target to the destination host c3.

Nova compute logs (sorry for a lot of keystone PKI tokens in the logs):
с3.success.migrate -> http://pastebin.archlinux.fr/581017
c2.success.migrate -> http://pastebin.archlinux.fr/581027
c2.fail.migrate -> http://pastebin.archlinux.fr/581029
c3.fail.migrate -> http://pastebin.archlinux.fr/581040

Version of software (both of compute nodes):
python-module-novaclient-2.17.0-alt1
python-module-nova-2014.1.2-alt2
openstack-nova-common-2014.1.2-alt2
openstack-nova-compute-2014.1.2-alt2
libvirt-1.0.4-alt2

Help me, please.

WBR.
Ainur Shakirov.
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Openstack Nat Problem

Hi

I have a question about networking on OpenStack. I am trying to create 2 network segments using an Ubuntu VM. It has 2 network cards, on 192.168.1.0/24 and 192.168.2.0/24. I configured NAT rules using iptables but it doesn't work as I expected. When I try to ping from the 192.168.2.0/24 network to the 192.168.1.0/24 network, the Ubuntu server gets the packet but cannot forward it to the other side.

I googled but cannot find any solution. So how can I configure NAT properly on the Ubuntu VM? I have configured sysctl for IP forwarding and the iptables rules.

I can verify that the ping packet from Machine2 reaches Machine1, but the ping reply does not come back. The reply packet arrives at Ubuntu's 192.168.1.5 interface, but the NAT rule does not take effect.


Below are the rules I applied. eth0 is the external network, eth1 the internal network.

echo 1 > /proc/sys/net/ipv4/ip_forward

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT

iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

Best regards
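
Editor's note: when debugging a setup like this, it helps to confirm that
forwarding and the NAT rule are actually active (standard commands, shown as
a sketch):

    sysctl net.ipv4.ip_forward
    iptables -t nat -L POSTROUTING -n -v
    iptables -L FORWARD -n -v

Also note that when the "router" is itself a Neutron-managed VM, Neutron's
anti-spoofing rules can silently drop forwarded packets whose source address
does not match the port; allowed-address-pairs (where available) or relaxing
port security is the usual remedy.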

[Openstack] floating ip (icehouse)

Hello,

I have a controller, a network and a compute node setup. When I spin up an
instance, it gets its IP and it can reach the internet. Then I assign a
floating IP. From the dashboard or CLI I can see the IP's association with
the instance, but I can't ping this floating IP externally or from within the
instance. Where do I look? Distro: Ubuntu 14.04 with ML2.

Thanks
Paras.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi,
Is this IP listed in your network? Kindly confirm that. Also, maybe the
firewall is configured to drop those packets. Check it with traceroute
and post the output. That's a start!

Regards,
Nitish B.



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Mirantis FUEL

Hello Everyone,

I am doing some research about Fuel, but I have a big doubt and there seems
to be no clear answer in the documents I have been reading. I don't want to
waste my time doing an installation only to discover afterwards that Fuel
supports only the Mirantis OpenStack distribution.

So, the question is: can I use Fuel and not use the OpenStack distribution
of Mirantis?

Thanks!


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

In a word, no. My experience has been that Fuel really is a Mirantis
product. Their distro, their support. Folks used to be able to install
Red Hat RDO if they had access to a Satellite server, but no longer.

But that's how a lot of packages are these days: Ubuntu Juju, SUSE Cloud
v4, Red Hat RDO, VMware VIO, each supported by its respective vendor.
Other vendors package OpenStack with their own optimizations (e.g. Piston,
Cloudscaling, etc.).

Are you looking for a product that orchestrates a vanilla install of
OpenStack (not tied to a specific distro)? If so, your best bet is probably
something like Puppet, Chef, Salt, or Ansible.

Adam Lawson
CEO, Principal Architect

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] OpenStack DefCore Community Meetings Wed 6pm PT & Thurs 8 am PT

OpenStack Community,

I'd like to invite you to join the DefCore committee in a discussion/review of the Havana Designated Sections as recommended by the DefCore committee. We've scheduled two meetings to ensure global coverage. The meetings will cover the same material, so you do not need to attend both.

We would like your feedback on the designated sections as part of the review process (here's the latest: https://etherpad.openstack.org/p/DefCoreLighthouse.6 ).

What are designated sections? Before we talk about the sections, we'll review the overall process; there are review links below.

Call-in Information: https://etherpad.openstack.org/p/DefCoreLighthouse.7
Agenda:
* Review DefCore status and history (15 minutes + questions)
* Review & Discuss Designated Sections proposal (remainder, ~45 minutes)

Background Posts

Rob


Rob Hirschfeld
Dell, Sr. Distinguished Cloud Solution Architect
Please note, I am based in the CENTRAL (-6) time zone


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] [Neutron] Multiple allocation pools in a single subnet for floating ips

Hi,

How can I set up neutron so that it supports a range of fragmented IPs
within a subnet? Version: Icehouse

For example, I have the following floating IPs available for use:
192.168.1.10 and 192.168.1.15,
but 192.168.1.11-14 can't be used as floating IPs. With nova-network we
could add them one by one.

I tried to do this:
neutron subnet-update c58d4e69-d614-4d05-91e5-95b5cc48b670
--allocation-pools start=192.168.1.10,end=192.168.1.10 --allocation-pools
start=192.168.1.15,end=192.168.1.15

The environment is a lab where we were given a limited number of
floating IPs to use. As an example I used private IP space, but in the
actual lab environment I am using public, internet-routable IPv4 addresses
as floating IPs, hence the restriction and fragmentation of the IP range.

Regards,
Sam


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hello,

I don't know if it is possible to add such a fragmented allocation pool to
one subnet, but you can add two subnets to your network; maybe that will
solve your problem. See also the sketch below.


Regards,
Sławek Kapłoński
slawek@kaplonski.pl
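
Editor's note: if recreating the subnet is an option, Neutron does accept
multiple --allocation-pool arguments at creation time, which covers the
fragmented range directly (a sketch; the network name and addresses are taken
from the question):

    neutron subnet-create ext-net 192.168.1.0/24 --name ext-subnet \
      --disable-dhcp \
      --allocation-pool start=192.168.1.10,end=192.168.1.10 \
      --allocation-pool start=192.168.1.15,end=192.168.1.15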


[Openstack] Help regarding communication error while accessing images from openstack dashboard

Hi everyone,

After completing the OpenStack installation from the link http://devstack.org/
I was able to access the image list from the OpenStack dashboard, but after I
restarted my machine I am getting the following error:
CommunicationError at /admin/images/images/

Error finding address for
http://10.0.2.15:9292/v1/images/detail?sort_key=created_at&sort_dir=desc&limit=21&is_public=None:
HTTPConnectionPool(host='10.0.2.15', port=9292): Max retries exceeded
with url: /v1/images/detail?sort_key=created_at&sort_dir=desc&limit=21&is_public=None
(Caused by <class 'socket.error'>: [Errno 111] Connection refused)

Request Method: GET
Request URL: http://127.0.0.1/admin/images/images/
Django Version: 1.6.7
Exception Type: CommunicationError
Exception Value:

Error finding address for
http://10.0.2.15:9292/v1/images/detail?sort_key=created_at&sort_dir=desc&limit=21&is_public=None:
HTTPConnectionPool(host='10.0.2.15', port=9292): Max retries exceeded
with url: /v1/images/detail?sort_key=created_at&sort_dir=desc&limit=21&is_public=None
(Caused by <class 'socket.error'>: [Errno 111] Connection refused)

Exception Location: /opt/stack/python-glanceclient/glanceclient/common/http.py in _request, line 208
Python Executable: /usr/bin/python
Python Version: 2.7.6
Python Path:

['/opt/stack/horizon/openstack_dashboard/wsgi/../..',
'/opt/stack/python-keystoneclient',
'/opt/stack/python-glanceclient',
'/opt/stack/python-cinderclient',
'/opt/stack/python-novaclient',
'/opt/stack/python-swiftclient',
'/opt/stack/python-neutronclient',
'/opt/stack/python-heatclient',
'/opt/stack/python-openstackclient',
'/opt/stack/keystone',
'/opt/stack/glance',
'/opt/stack/cinder',
'/opt/stack/nova',
'/opt/stack/horizon',
'/opt/stack/heat',
'/opt/stack/tempest',
'/usr/lib/python2.7',
'/usr/lib/python2.7/plat-x86_64-linux-gnu',
'/usr/lib/python2.7/lib-tk',
'/usr/lib/python2.7/lib-old',
'/usr/lib/python2.7/lib-dynload',
'/usr/local/lib/python2.7/dist-packages',
'/usr/lib/python2.7/dist-packages',
'/usr/lib/python2.7/dist-packages/PILcompat',
'/usr/lib/python2.7/dist-packages/gtk-2.0',
'/usr/lib/python2.7/dist-packages/ubuntu-sso-client',
'/opt/stack/horizon/openstack_dashboard']

Server time: Wed, 10 Sep 2014 06:40:56 +0000

Moreover, when I run '$ nova list' or any other command from the terminal, I
get the following communication error:

ERROR (ConnectionError): HTTPConnectionPool(host='10.0.2.15', port=8774):
Max
retries exceeded with url:
/v2/4cb43a0df9974c9492a1501eb1b8cb56/servers/detail
(Caused by <class 'socket.error'>: [Errno 111] Connection refused)

I tried to check the glance API service and registry service at
http://127.0.0.1:9292 and http://127.0.0.1:9191 but found no services running
there.
Please tell me how to fix this error. Thank you very much in advance for
any assistance or guidance.
--
Kind Regards,
Sadia Bashir


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi,
After rebooting your machine, did you run ./stack.sh again?
From: 11msccssbashir@seecs.edu.pk
Date: Wed, 10 Sep 2014 12:22:41 +0500
To: openstack@lists.openstack.org
Subject: [Openstack] Help regarding communication error while accessing images from openstack dashboard



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Keystone in multiple datacenters

Does the current OpenStack release support running multiple redundant
instances of Keystone in multiple data centers, with the user database
synchronized across data centers? Is there any document that describes
what functionality is available?

BR..

VA


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On 09/10/2014 09:49 AM, Vinay Avasthi wrote:
Does the current OpenStack release support running multiple redundant
instances of Keystone in multiple data centers, with the user database
synchronized across data centers? Is there any document that describes
what functionality is available?

Yes, this works perfectly fine. At AT&T, we used MySQL Galera
replication to provide multi-master, synchronous replication of our
identity database and image registry database across the WAN in >7
datacenters. Since both the identity database and the image registry
have relatively low write-to-read ratios, Galera is a good fit for WAN
replication here.

The trick is to adjust your wsrep certification timeout values up slightly
so that the cluster tolerates the additional WAN latency a little better.

Specifically, adjust the wsrep_provider_options setting in your
wsrep.cnf to increase the various timeouts a bit above their defaults:

wsrep_provider_options="evs.keepalive_period = PT3S;
evs.inactive_check_period = PT10S; evs.suspect_timeout = PT30S;
evs.inactive_timeout = PT1M; evs.consensus_timeout = PT1M;"

If you can use Percona XtraDB Cluster 5.6, you can also check out the
new WAN segment functionality that helps in this type of setup:

http://www.percona.com/blog/2013/12/19/automatic-replication-relaying-galera-3/

Oh, and don't use the SQL token store! You do NOT want to be replicating
Keystone tokens from one DC to another, as the write volume is insane on
even a medium-sized deployment. Instead, use the memcache token driver
in Keystone and have each DC handle token handling for the users hitting
the Horizon endpoints in that DC. Note that you give up being able to
use a Keystone token across multiple datacenters, but I think the
performance benefits and stability offered by this solution are worth it.
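
Editor's note: in an Icehouse-era keystone.conf, the memcache token driver
Jay mentions is selected roughly like this (a sketch; the memcached address
is an assumption):

    [token]
    driver = keystone.token.backends.memcache.Token

    [memcache]
    servers = 127.0.0.1:11211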

Feel free to check out my slides about managing different data stores in
your OpenStack deployments here:

http://bit.ly/openstack-data-storage

Best,
-jay

[Openstack] SWIFT AND ROOT DIRECTORY

When calling swift hosted media files through flash or silverlight, the
player makes a call to the root directory for clientaccesspolicy.xml or
crossdomain.xml.

For example, if my flash player calls:

http://192.168.1.1:8080/v1/AUTH_xxx/media/index.m3u8

The player will then look for:

http://192.168.1.1:8080/crossdomain.xml

I assume I would have to host the proxy server via apache httpd, and make
the needed adjustments.

Thoughts?

Thanks!


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Actually, it's even easier than that. Swift provides, out of the box, a crossdomain middleware so that you can return the appropriate domain-wide policy for your content.

See http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.crossdomain

--John
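
Editor's note: the middleware John points to is enabled in proxy-server.conf
roughly as follows (a sketch based on the linked documentation; place the
filter early in your actual pipeline):

    [filter:crossdomain]
    use = egg:swift#crossdomain
    cross_domain_policy = <allow-access-from domain="*" secure="false" />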



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Icehouse [Neutron][ML2][VLAN] - nova boot failed

Hi,

When I am configuring the ML2 plugin with a vlan-based network, nova boot fails with the following error, whereas if I change the driver from ML2 to OVS, things work fine.
Please suggest.

Here is the ml2 configuration:

[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch
[ml2_type_vlan]
network_vlan_ranges = physnet1:1:4094
[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:eth1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

/etc/neutron/neutron.conf:
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True

logs:

root@build-server:/etc/init# grep -ir "241e4c21-5604-4a1b-a782-f44128fbd0b3" /var/log/nova/*
/var/log/nova/nova-compute.log:2014-09-11 02:41:23.053 21941 ERROR nova.compute.manager [req-f4550d18-b37c-4dc6-9544-a9e8b2e38821 5249dfc7f6c34fe68f98cfe60f82b331 22ad3f28f8f24b7fb14f17af44aa55f0] [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] Instance failed to spawn
/var/log/nova/nova-compute.log:2014-09-11 02:41:23.053 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] Traceback (most recent call last):
/var/log/nova/nova-compute.log:2014-09-11 02:41:23.053 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1720, in _spawn
/var/log/nova/nova-compute.log:2014-09-11 02:41:23.053 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] block_device_info)
/var/log/nova/nova-compute.log:2014-09-11 02:41:23.053 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2250, in spawn
/var/log/nova/nova-compute.log:2014-09-11 02:41:23.053 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] write_to_disk=True)
/var/log/nova/nova-compute.log:2014-09-11 02:41:23.053 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3431, in to_xml
/var/log/nova/nova-compute.log:2014-09-11 02:41:23.053 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] disk_info, rescue, block_device_info)
/var/log/nova/nova-compute.log:2014-09-11 02:41:23.053 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3247, in get_guest_config
/var/log/nova/nova-compute.log:2014-09-11 02:41:23.053 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] flavor)
/var/log/nova/nova-compute.log:2014-09-11 02:41:23.053 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 384, in get_config
/var/log/nova/nova-compute.log:2014-09-11 02:41:23.053 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] _("Unexpected vif_type=%s") % vif_type)
/var/log/nova/nova-compute.log:2014-09-11 02:41:23.053 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] NovaException: Unexpected vif_type=binding_failed
/var/log/nova/nova-compute.log:2014-09-11 02:41:23.053 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3]
/var/log/nova/nova-compute.log:2014-09-11 02:41:23.877 21941 ERROR nova.virt.libvirt.driver [-] [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] During wait destroy, instance disappeared.
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 ERROR nova.compute.manager [req-f4550d18-b37c-4dc6-9544-a9e8b2e38821 5249dfc7f6c34fe68f98cfe60f82b331 22ad3f28f8f24b7fb14f17af44aa55f0] [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] Error: Unexpected vif_type=binding_failed
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] Traceback (most recent call last):
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1311, in _build_instance
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] set_access_ip=set_access_ip)
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 399, in decorated_function
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] return function(self, context, *args, **kwargs)
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1723, in _spawn
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] LOG.exception(_('Instance failed to spawn'), instance=instance)
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, in __exit__
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] six.reraise(self.type_, self.value, self.tb)
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1720, in _spawn
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] block_device_info)
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2250, in spawn
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] write_to_disk=True)
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3431, in to_xml
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] disk_info, rescue, block_device_info)
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3247, in get_guest_config
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] flavor)
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 384, in get_config
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] _("Unexpected vif_type=%s") % vif_type)
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] NovaException: Unexpected vif_type=binding_failed
/var/log/nova/nova-compute.log:2014-09-11 02:41:24.275 21941 TRACE nova.compute.manager [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3]
/var/log/nova/nova-compute.log:2014-09-11 02:50:44.135 23339 ERROR nova.virt.libvirt.driver [-] [instance: 241e4c21-5604-4a1b-a782-f44128fbd0b3] During wait destroy, instance disappeared.

Regards,
Subbareddy
Persistent systems ltd.



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Shouldn't this be physnet1:br-eth1

bridge_mappings = physnet1:eth1

Create a bridge named br-eth1 with eth1 as port on both controller and compute nodes
ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 eth1
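
Editor's note: with the bridge created as above, the corresponding [ovs]
section would then read (a sketch following the advice above):

    [ovs]
    bridge_mappings = physnet1:br-eth1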



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
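
For readers hitting the same trace: vif_type=binding_failed means Neutron could not bind the port to the compute host, so Nova aborts the boot before libvirt is involved. A minimal check sequence, assuming an ML2/Open vSwitch setup (paths are the usual package defaults):

# is the L2 agent on the compute host alive and registered?
neutron agent-list

# do the agent's bridge mappings cover the physical network used by the port's network?
grep bridge_mappings /etc/neutron/plugins/ml2/ml2_conf.ini

# inspect the failed port as admin; binding:vif_type will show binding_failed
neutron port-show <port-id>

A dead agent, or mechanism_drivers / bridge_mappings that disagree between the controller and the compute node, fails the binding exactly like this.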

[Openstack] [cinder] Anyone using the XenAPI Driver?

For two releases now, the driver [1] has been missing a minimum
requirement, 'extend_volume' [2].

The maintainer has been unresponsive for the last two releases as well.
To be fair to the requirements we set for other drivers, and to
maintain Cinder's goals, the Cinder team would like to drop it in the
K release.

Before we go about doing this, I would like to see who is using it,
and if anyone who is would like to contribute to it. Thanks!

[1] - https://github.com/openstack/cinder/tree/master/cinder/volume/drivers/xenapi
[2] - http://docs.openstack.org/developer/cinder/devref/drivers.html#minimum-features

-Mike Perez


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Resize fails; Failed to compute_task_migrate_server: No valid host was found.

OpenStack version: Icehouse

Issue: fail to resize

Setup: 3 nodes, namely Controller, Compute and Network; with nova-compute running solely on Compute node.

Steps to reproduce:
1. Create an instance using the cirros 0.3.2 image.
2. Verify instance is running: nova list
3. Resize the instance: nova resize cirros 2 --poll
4. Verify the resize: nova list
5. Note that the server status is ACTIVE instead of VERIFY_RESIZE.
6. Check nova-scheduler.log at Controller node and note the error.

Snippet of the nova-scheduler.log:

2014-09-10 14:12:46.550 7488 WARNING nova.scheduler.utils [req-63c12687-9f68-4501-a164-002c0f5fba7d 77bdd3f911744f72af7038d40d722439 73a095bf078443c9b340d871deaabcc3] Failed to compute_task_migrate_server: No valid host was found.

Traceback (most recent call last):

File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 139, in inner

return func(*args, **kwargs)

File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 298, in select_destinations

filter_properties)

File "/usr/lib/python2.7/dist-packages/nova/scheduler/filterscheduler.py", line 140, in selectdestinations

raise exception.NoValidHost(reason='')

NoValidHost: No valid host was found.

2014-09-10 14:12:46.552 7488 WARNING nova.scheduler.utils [req-63c12687-9f68-4501-a164-002c0f5fba7d 77bdd3f911744f72af7038d40d722439 73a095bf078443c9b340d871deaabcc3] [instance: 5960d4fe-8905-49f0-9cd9-1d09ded90810] Setting instance to ACTIVE state.

2014-09-10 14:12:46.645 7488 WARNING nova.conductor.manager [req-63c12687-9f68-4501-a164-002c0f5fba7d 77bdd3f911744f72af7038d40d722439 73a095bf078443c9b340d871deaabcc3] [instance: 5960d4fe-8905-49f0-9cd9-1d09ded90810] No valid host found for cold migrate

NOTE: I tried setting "allow_resize_to_same_host=true" and "allow_migrate_to_same_host=true" in all nova.conf files and
restarted the nova services, but still saw the same issue.

Thanks,
Danny


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On 10 Sep 2014, at 19:44, Danny Choi (dannchoi) dannchoi@cisco.com wrote:

OpenStack version: Icehouse

Issue: fail to resize

Setup: 3 nodes, namely Controller, Compute and Network; with nova-compute running solely on Compute node.

Steps to reproduce:
1. Create an instance using the cirros 0.3.2 image.
2. Verify instance is running: nova list
3. Resize the instance: nova resize cirros 2 --poll
4. Verify the resize: nova list
5. Note that the server status is ACTIVE instead of VERIFY_RESIZE.
6. Check nova-scheduler.log at Controller node and note the error.

Snippet of the nova-scheduler.log:

2014-09-10 14:12:46.550 7488 WARNING nova.scheduler.utils [req-63c12687-9f68-4501-a164-002c0f5fba7d 77bdd3f911744f72af7038d40d722439 73a095bf078443c9b340d871deaabcc3] Failed to compute_task_migrate_server: No valid host was found.
Traceback (most recent call last):

File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 139, in inner
return func(*args, **kwargs)

File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 298, in selectdestinations
filter
properties)

File "/usr/lib/python2.7/dist-packages/nova/scheduler/filterscheduler.py", line 140, in selectdestinations
raise exception.NoValidHost(reason='')

NoValidHost: No valid host was found.

2014-09-10 14:12:46.552 7488 WARNING nova.scheduler.utils [req-63c12687-9f68-4501-a164-002c0f5fba7d 77bdd3f911744f72af7038d40d722439 73a095bf078443c9b340d871deaabcc3] [instance: 5960d4fe-8905-49f0-9cd9-1d09ded90810] Setting instance to ACTIVE state.
2014-09-10 14:12:46.645 7488 WARNING nova.conductor.manager [req-63c12687-9f68-4501-a164-002c0f5fba7d 77bdd3f911744f72af7038d40d722439 73a095bf078443c9b340d871deaabcc3] [instance: 5960d4fe-8905-49f0-9cd9-1d09ded90810] No valid host found for cold migrate

NOTE: I tried setting "allow_resize_to_same_host=true" and "allow_migrate_to_same_host=true" in all nova.conf files and
restarted the nova services, but still saw the same issue.

Could you double-check if the two settings are in the [DEFAULT] section of nova.conf?

Ramon


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
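
For reference, a minimal sketch of what Ramon is suggesting (Ubuntu package paths and service names assumed):

# /etc/nova/nova.conf on every node running nova services
[DEFAULT]
allow_resize_to_same_host = True
allow_migrate_to_same_host = True

# then restart the services that read the flags
service nova-api restart
service nova-scheduler restart
service nova-compute restart     # on the compute node

With a single compute node, resize is a cold migration back onto the same host, so without these flags the scheduler correctly finds no other valid host.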

[Openstack] Neutron networking help RHEL 6.5 openstack icehouse RDO repo.

Hello All,

I have been mostly trying to duplicate this guide: https://developer.rackspace.com/blog/neutron-networking-vlan-provider-networks/ The provider network part is useful to me, but the GRE tunnels and DHCP are not needed for us. We want to assign the addresses via a central build server rather than having openstack manage it on a site-to-site basis.

The VLANs are trunked to the port of the management server and the compute/network node. The VLAN stuff is verified working through manual testing. When I tcpdump eth0 I also see lldp announcements showing the right information in regards to vlans and so on.

I run a standalone management server. I am running the network and compute node on the second server and I modified necessary settings using the installation guide for icehouse on docs.openstack.org. I then used some of the settings above to create the OVS settings.

I have one network card per chassis, a 10Gb Intel adapter.

eth0 has no addressing

I have a bridge called br-eth0 with the addressing from the management network assigned to it.

Here are my settings from /etc/neutron/plugin.ini (ml2_conf.ini). These are consistent across the management and compute/network host. Everything seems to be communicating with its databases fine.

I have tried commenting out the [ml2] stuff and using just the [ovs] stuff, with no success. 106 and 400 are the VLANs I configure here, and the network is labelled openstack. We have a third management network, but traffic on that is all untagged.

Here are the parameters set below currently.

[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch

[ml2_type_vlan]
network_vlan_ranges = openstack:106:106,openstack:400:400

[ovs]
enable_tunneling = False
integration_bridge = br-int
bridge_mappings = openstack:br-eth0
network_vlan_ranges = openstack:106:106,openstack:400:400

Neutron net-list shows the networks created when I run the provider commands linked in the article above. I added a gateway to the address space though, as the cirros image wouldn't launch without a gateway defined for the network range.

+--------------------------------------+------+----------------------------------------------------------+
| id                                   | name | subnets                                                  |
+--------------------------------------+------+----------------------------------------------------------+
| 0d6d0bb3-4b99-4d3f-8d67-fa3c9cd9ce64 | VRF  | 2601be8f-483b-4823-ae7d-1475ce23b3a5 10.169.2.0/24       |
| 61741e90-09d0-406d-ad37-32712264c62e | GRT  | 1a170428-4a47-4dfc-a8d4-d98216b93050 182.255.xxx.xxx/28  |
+--------------------------------------+------+----------------------------------------------------------+

When I check the web interface the correct segmentation IDs are assigned to each network

For example in VRF
Provider Network
Network Type: vlan
Physical Network: openstack
Segmentation ID: 106

Neutron subnet-list shows subnets created.

+--------------------------------------+---------+--------------------+---------------------------------------------------------+
| id                                   | name    | cidr               | allocation_pools                                        |
+--------------------------------------+---------+--------------------+---------------------------------------------------------+
| 2601be8f-483b-4823-ae7d-1475ce23b3a5 | NAP_VRF | 10.169.2.0/24      | {"start": "10.169.2.2", "end": "10.169.2.254"}          |
| 1a170428-4a47-4dfc-a8d4-d98216b93050 | NAP_GRT | 182.255.xxx.xxx/28 | {"start": "182.255.xxx.xxx", "end": "182.255.xxx.xxx"}  |
+--------------------------------------+---------+--------------------+---------------------------------------------------------+

When I attempt to launch a VM with networking the following message goes to /var/log/nova/api.log

2014-09-11 05:42:36.169 14602 INFO oslo.messaging._drivers.impl_qpid [-] Connected to AMQP server on qld-gdpt-mgmt.aarnet.net.au:5672
2014-09-11 06:07:19.792 14613 ERROR nova.api.openstack [req-6a2ea530-1f70-407c-a3bf-255a80f06c1e c6099a503e274f4db41ea044da056112 b533b1d0bc754286936d28edfd0e94e3] Caught error: Timed out waiting for a reply to message ID 4146a0d1ee334cfd9f22587140c49928
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack Traceback (most recent call last):
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/nova/api/openstack/init.py", line 125, in call
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack return req.getresponse(self.application)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/webob/request.py", line 1296, in send
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack application, catch
excinfo=False)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/webob/request.py", line 1260, in call
application
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack appiter = application(self.environ, startresponse)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/webob/dec.py", line 144, in call
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack return resp(environ, startresponse)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth
token.py", line 679, in call
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack return self.app(env, startresponse)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/webob/dec.py", line 144, in call
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack return resp(environ, start
response)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/webob/dec.py", line 144, in call
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack return resp(environ, startresponse)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/routes/middleware.py", line 131, in call
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack response = self.app(environ, start
response)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/webob/dec.py", line 144, in call
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack return resp(environ, startresponse)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/webob/dec.py", line 130, in call
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack resp = self.call
func(req, *args, **self.kwargs)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/webob/dec.py", line 195, in call_func
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack return self.func(req, *args, **kwargs)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py", line 917, in __call__
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack content_type, body, accept)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py", line 983, in _process_stack
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack action_result = self.dispatch(meth, request, action_args)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py", line 1070, in dispatch
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack return method(req=request, **action_args)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/nova/api/openstack/compute/servers.py", line 956, in create
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack legacy_bdm=legacy_bdm)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/nova/hooks.py", line 103, in inner
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack rv = f(*args, **kwargs)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/nova/compute/api.py", line 1341, in create
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack legacy_bdm=legacy_bdm)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/nova/compute/api.py", line 968, in _create_instance
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack max_count)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/nova/compute/api.py", line 739, in _validate_and_build_base_options
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack requested_networks, max_count)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/nova/compute/api.py", line 463, in _check_requested_networks
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack max_count)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/nova/network/api.py", line 95, in wrapped
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack return func(self, context, *args, **kwargs)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/nova/network/api.py", line 420, in validate_networks
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack requested_networks)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/nova/network/rpcapi.py", line 225, in validate_networks
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack return self.client.call(ctxt, 'validate_networks', networks=networks)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/client.py", line 361, in call
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack return self.prepare().call(ctxt, method, **kwargs)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/client.py", line 150, in call
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack wait_for_reply=True, timeout=timeout)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/oslo/messaging/transport.py", line 90, in _send
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack timeout=timeout)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 412, in send
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack return self._send(target, ctxt, message, wait_for_reply, timeout)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 403, in _send
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack result = self._waiter.wait(msg_id, timeout)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 267, in wait
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack reply, ending = self._poll_connection(msg_id, timeout)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack File "/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 217, in _poll_connection
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack % msg_id)
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack MessagingTimeout: Timed out waiting for a reply to message ID 4146a0d1ee334cfd9f22587140c49928
2014-09-11 06:07:19.792 14613 TRACE nova.api.openstack

When I launch the VM without networking, the build succeeds but it creates its own network port in each subnet, even when not specifying the --nic and net-id options with the nova boot command. Neither port responds to ping, and when you log in to the VM via the console you get nowhere even if you assign an address, as the VLAN tag applied is random. The web interface shows the port up and active in the subnet, and the device owner is compute:None.

Neutron port-list
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                              |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| 511c60ce-d2dd-4f07-a735-cd9ca4ecbb50 |      | fa:16:3e:71:6a:6a | {"subnet_id": "1a170428-4a47-4dfc-a8d4-d98216b93050", "ip_address": "182.255.xxx.xxx"} |
| 79f595cd-d686-4779-b48e-a64bef964dd3 |      | fa:16:3e:cd:ec:88 | {"subnet_id": "2601be8f-483b-4823-ae7d-1475ce23b3a5", "ip_address": "10.169.2.3"}      |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+

When I do an ovs-vsctl show, I see the new tap devices created for the VMs:

ovs-vsctl show
68eae405-27ae-40f1-b9ab-e751a06987e9
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port "qvo79f595cd-d6"
tag: 2
Interface "qvo79f595cd-d6"
Port "int-br-eth0"
Interface "int-br-eth0"
Port "qvo511c60ce-d2"
tag: 1
Interface "qvo511c60ce-d2"
Bridge "br-eth0"
Port "br-eth0"
Interface "br-eth0"
type: internal
Port "eth0"
Interface "eth0"
Port "phy-br-eth0"
Interface "phy-br-eth0"
ovs_version: "1.11.0"

Is there some step I am missing in the plugin.ini (ml2_conf.ini) file that is stopping me from creating VMs with networking? It can't be nova, as the VM launches when no network is specified. I have had a second admin check my configuration files; they follow the guide to the letter, aside from the modified network settings we need to suit our setup, which is why I am thinking this is where I have gone wrong.

Thanks,
Alex


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
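
For comparison, the provider-network commands from the linked article map onto the configuration above roughly like this (segmentation ID and subnet taken from the post; the gateway address is illustrative):

neutron net-create VRF --provider:network_type vlan \
    --provider:physical_network openstack \
    --provider:segmentation_id 106
neutron subnet-create VRF 10.169.2.0/24 --name NAP_VRF \
    --gateway 10.169.2.1 --disable-dhcp

# then boot explicitly against that network
nova boot --image cirros --flavor m1.tiny \
    --nic net-id=<net-uuid> test-vm

One thing worth checking against the ovs-vsctl output above: the flows that translate the internal tags (1 and 2) to VLANs 106/400 live on br-eth0 and br-int, and can be inspected with ovs-ofctl dump-flows br-eth0.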

[Openstack] neutron-usage-audit and ml2

Hi

I am trying to use the neutron-usage-audit script to generate audit samples in
ceilometer.

But I get an error:

'Ml2Plugin' object has no attribute 'get_routers'

I get a similar error for 'get_floatingips'.

Any hints on where I have to dive in?

I use Debian with Icehouse, neutron configured with ml2 and openvswitch.

Regards

Benedikt


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
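
If it helps: with ML2 the L3 code moved out of the core plugin into a separate service plugin, so anything that calls get_routers on the core plugin (as neutron-usage-audit apparently does here) will fail unless routing is wired in as a service plugin. A minimal sketch of the relevant neutron.conf lines for Icehouse:

[DEFAULT]
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin

If the script still asks the core plugin for routers even with the service plugin enabled, that points at a bug in the script rather than the configuration.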

[Openstack] error connection refused while creating glance image

hi ,

i am creating an openstack 3 node setup, following the installation guide:

http://docs.openstack.org/icehouse/install-guide/install/apt/content/

i am getting a connection refused error. can someone please help me?

glance image-create --name "cirros-0.3.2-x8664" --disk-format qcow2
--container-format bare --is-public True --progress <
cirros-0.3.2-x86
64-disk.img
Error finding address for http://controller:9292/v1/images: [Errno 111]
Connection refused

i have configured for v2, but in the error log it is v1.

thanks,
srinivas.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

hi Erno ,
Thanks a lot.
i have set OS_IMAGE_API_VERSION=2.

After that i have run glance image-create, and now i am getting a different
error.
pls help me .

root@user-ThinkCentre-M73:/tmp/images# glance image-create --name
"cirros-0.3.2-x86_64" --disk-format qcow2 --container-format bare
--is-public True --progress < cirros-0.3.2-x86_64-disk.img
usage: glance [--version] [-d] [-v] [--get-schema] [--timeout TIMEOUT]
              [--no-ssl-compression] [-f] [--os-image-url OS_IMAGE_URL]
              [--os-image-api-version OS_IMAGE_API_VERSION] [-k]
              [--os-cert OS_CERT] [--cert-file OS_CERT] [--os-key OS_KEY]
              [--key-file OS_KEY] [--os-cacert <ca-certificate-file>]
              [--ca-file OS_CACERT] [--os-username OS_USERNAME]
              [--os-user-id OS_USER_ID]
              [--os-user-domain-id OS_USER_DOMAIN_ID]
              [--os-user-domain-name OS_USER_DOMAIN_NAME]
              [--os-project-id OS_PROJECT_ID]
              [--os-project-name OS_PROJECT_NAME]
              [--os-project-domain-id OS_PROJECT_DOMAIN_ID]
              [--os-project-domain-name OS_PROJECT_DOMAIN_NAME]
              [--os-password OS_PASSWORD] [--os-tenant-id OS_TENANT_ID]
              [--os-tenant-name OS_TENANT_NAME] [--os-auth-url OS_AUTH_URL]
              [--os-region-name OS_REGION_NAME]
              [--os-auth-token OS_AUTH_TOKEN]
              [--os-service-type OS_SERVICE_TYPE]
              [--os-endpoint-type OS_ENDPOINT_TYPE]
              ...
glance: error: unrecognized arguments: --name --disk-format qcow2
--container-format bare --is-public True --progress

Thanks,
srinivas.

Hi srinivas,

If you issue just the command glance, it will give you the help text. The 4th from
the bottom is --os-image-api-version, which defaults to 1. I'm pretty sure
you have not set this for yourself.

- Erno

From: Srinivasreddy R [mailto:srinivasreddy4390@gmail.com]
Sent: 11 September 2014 11:06
To: openstack@lists.openstack.org
Subject: [Openstack] error connection refused while creating glance image

hi ,

i am creating an openstack 3 node setup, following the installation guide:

http://docs.openstack.org/icehouse/install-guide/install/apt/content/

i am getting a connection refused error. can someone please help me?

glance image-create --name "cirros-0.3.2-x8664" --disk-format qcow2
--container-format bare --is-public True --progress <
cirros-0.3.2-x86
64-disk.img
Error finding address for http://controller:9292/v1/images: [Errno 111]
Connection refused

i have configured for v2, but in the error log it is v1.

thanks,
srinivas.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
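
To spell out Erno's point: the Icehouse install guide's image-create syntax is the v1 client syntax, so the API version should be left at (or reset to) 1:

# make sure no environment variable forces the v2 client
unset OS_IMAGE_API_VERSION      # or: export OS_IMAGE_API_VERSION=1

glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 \
    --container-format bare --is-public True --progress \
    < cirros-0.3.2-x86_64-disk.img

The original "Connection refused" on http://controller:9292 is a separate problem: check that glance-api and glance-registry are actually running on the controller, and that the hostname controller resolves from the machine running the client.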

[Openstack] Unable to store personality file

Hello,
I am trying to launch an Ubuntu VM in Havana and trying to set the
hostname in the /etc/hostname file using the personality file while booting
the VM. However, this always fails with this exception: NovaException:
partition search unsupported with nbd. Is there a way to work around this?
Thanks.
--
Regards,
Nagaraj


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
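
One common workaround, assuming the guest image ships cloud-init, is to skip file injection entirely and deliver the hostname through a config drive (file and instance names below are illustrative):

# user-data.yaml  (cloud-init cloud-config format)
#cloud-config
hostname: myhost
fqdn: myhost.example.com

nova boot --image <ubuntu-image> --flavor m1.small \
    --config-drive true --user-data user-data.yaml myhost

The "partition search unsupported with nbd" error comes from Nova's qemu-nbd file-injection path; the config-drive route avoids mounting the guest disk on the host altogether.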

[Openstack] Hitachi HUS-110 FC volume driver

Hi guys,
I read that Cinder provides an iSCSI driver for HUS (Hitachi Unified Storage)
http://www.hds.com/products/storage-systems/hitachi-unified-storage-100-family.html
arrays such as HUS-110, HUS-130, and HUS-150.

http://docs.openstack.org/icehouse/config-reference/content/hds-volume-driver.html

But I want to use HUS-110 + Fibre Channel to provide volumes for VMs.

Is there an FC driver for the Hitachi HUS-110, or any solution to use FC to provide
volumes directly to the VM, not through the Cinder node?

Thanks and regards


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] Openstack nova instance delete issue

Folks,

I have a question: I am seeing a behavior where deleting a nova instance returns an Error status and the instance fails to delete.
The scenario where I see this behavior is as follows:

  1. Create a nova VM instance.
  2. Create a cinder volume
  3. Attach this volume to nova vm instance and after that wait for volume to go to ‘In-use’ state.
  4. Mount the created partition to VM
  5. Unmount the created partition
  6. Detach the volume from VM instance and after that wait for volume to go to ‘Available’ state
  7. Delete the volume
  8. Delete the nova VM instance ==> It fails at this step, because status of server instance is set to Error

This happens one out of two times I run the above test, so it is not seen consistently. I am wondering if someone has seen an issue like this?
BTW, if I swap steps 7) and 8), in other words delete the nova VM instance before deleting the volume, then I do not see this issue and the instance gets deleted correctly. So I suspect that either the detach doesn't completely detach the volume from the instance, or deleting the instance somehow triggers the volume to detach indirectly.
Any idea why and how this can happen?

Thanks,
Harshil


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
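
One way to rule out a detach race is to wait on the volume state explicitly between steps 6 and 7; a rough sketch with the CLI:

nova volume-detach <instance-id> <volume-id>

# poll until cinder really reports the volume as available
while [ "$(cinder show <volume-id> | awk '/ status / {print $4}')" != "available" ]; do
    sleep 2
done

cinder delete <volume-id>
nova delete <instance-id>

If the failure still reproduces with that wait in place, the nova-compute log around the delete should show why the instance went to ERROR.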

[Openstack] History of fixed IPs in neutron

Hello,

Do you know if there is any way to save the full history of each fixed IP assigned to a
port (and instance)? For example: I have an external network with some subnet.
A user creates instance A connected to this network, so I have in the history
that IP x.x.x.x was assigned to instance A; then he deletes the instance, so the
history shows that the IP is no longer assigned to the instance. And so on.
I'm using the openstack havana version and neutron with ML2, and I don't see such a
possibility, but maybe I'm wrong :)


Best regards
Sławek Kapłoński
slawek@kaplonski.pl


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] big data on openstack

Hello everybody,

I'm the author of Ferry, a new open-source project that automates
launching big data stacks in the cloud. For those familiar with
Sahara, it's similar in scope but has additional support for Spark,
Cassandra, and GlusterFS/OpenMPI (and of course Hadoop). I don't want
to spam folks too much, but I thought the community might find some
use in our recent OpenStack backend.

http://blog.opencore.io/announcements/2014/08/01/openstack/

If anybody is interested in installing and testing out the software,
feel free to shoot me an email.

Cheers,
--James

Web: http://ferry.opencore.io
Email: jlh@opencore.io
Twitter: @opencoreio


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

I like what you're doing and wonder if you have thought about integrating
your efforts with the Sahara project to improve/advance support of these
technologies instead of developing a parallel effort? Good on you for
working towards fixing something you find lacking my friend.

Adam Lawson

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072

On Thu, Sep 11, 2014 at 5:28 PM, James Horey jhorey@gmail.com wrote:

Hello everybody,

I'm the author of Ferry, a new open-source project that automates
launching big data stacks in the cloud. For those familiar with
Sahara, it's similar in scope but has additional support for Spark,
Cassandra, and GlusterFS/OpenMPI (and of course Hadoop). I don't want
to spam folks too much, but I thought the community might find some
use in our recent OpenStack backend.

http://blog.opencore.io/announcements/2014/08/01/openstack/

If anybody is interested in installing and testing out the software,
feel free to shoot me an email.

Cheers,
--James

Web: http://ferry.opencore.io
Email: jlh@opencore.io
Twitter: @opencoreio


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

[Openstack] nova not able to find networks

hi,
1. i have created a router:
   neutron router-create demo-router

2. attached the router to the demo tenant demo-subnet:
   neutron router-interface-add demo-router demo-subnet

3. attached the router to the external network:
   neutron router-gateway-set demo-router ext-net

4.
root@user-ThinkCentre-M73:/home/user# neutron net-list
+--------------------------------------+----------+-----------------------------------------------------+
| id                                   | name     | subnets                                             |
+--------------------------------------+----------+-----------------------------------------------------+
| d81390bf-281c-40b6-a611-6a193da88963 | ext-net  | fc4b6684-b0f9-4983-94ed-7d075880e921 203.0.113.0/24 |
| e1bd057f-59c3-426e-9c2e-f3e2953a392e | demo-net | e6bea0e8-a27d-45f1-b95e-461b611f9a42 192.168.1.0/24 |
+--------------------------------------+----------+-----------------------------------------------------+

5.
root@user-ThinkCentre-M73:/home/user# nova net-list
+----+-------+------+
| ID | Label | CIDR |
+----+-------+------+
+----+-------+------+

we are able to see the neutron net-list but not the nova net-list,
even though we have successfully created the networks for nova as well as neutron.
neutron is able to see both ext-net and demo-net,
but nova is not able to see the networks.

i have checked with both the admin and demo users.

please help me with how to proceed further.

we are following 3 node setup installation .
http://docs.openstack.org/icehouse/install-guide/install/apt/content/

thanks,
srinivas.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Hi Srinivas,

Can you please check if nova.conf has proper neutron details.

network_api_class = nova.network.neutronv2.api.API

[neutron]
service_metadata_proxy = True
url = http://1.x.x.x:9696
region_name = RegionOne
admin_tenant_name = service
auth_strategy = keystone
admin_auth_url = http://1.x.x.x:35357/v2.0
admin_password = Openstack1
admin_username = neutron

If yes, then try restarting the nova services and check.

Regards,
Sushma Korati
sushma_korati@persistent.co.in |
Persistent Systems Ltd. | Partners in Innovation | www.persistentsys.com
Please consider your environmental responsibility: Before printing this e-mail or any other document, ask yourself whether you need a hard copy.


From: Srinivasreddy R srinivasreddy4390@gmail.com
Sent: Friday, September 12, 2014 11:33 AM
To: openstack@lists.openstack.org
Subject: [Openstack] nova not able to find networks

hi,
1. i have created a router:
   neutron router-create demo-router

2. attached the router to the demo tenant demo-subnet:
   neutron router-interface-add demo-router demo-subnet

3. attached the router to the external network:
   neutron router-gateway-set demo-router ext-net

4.
root@user-ThinkCentre-M73:/home/user# neutron net-list
+--------------------------------------+----------+-----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+----------+-----------------------------------------------------+
| d81390bf-281c-40b6-a611-6a193da88963 | ext-net | fc4b6684-b0f9-4983-94ed-7d075880e921 203.0.113.0/24 |
| e1bd057f-59c3-426e-9c2e-f3e2953a392e | demo-net | e6bea0e8-a27d-45f1-b95e-461b611f9a42 192.168.1.0/24 |
+--------------------------------------+----------+-----------------------------------------------------+

5.
root@user-ThinkCentre-M73:/home/user# nova net-list
+----+-------+------+
| ID | Label | CIDR |
+----+-------+------+
+----+-------+------+

we are able to see the neutron net-list but not the nova net-list,
even though we have successfully created the networks for nova as well as neutron.
neutron is able to see both ext-net and demo-net,
but nova is not able to see the networks.

i have checked with both the admin and demo users.

please help me with how to proceed further.

we are following 3 node setup installation .
http://docs.openstack.org/icehouse/install-guide/install/apt/content/

thanks,
srinivas.

DISCLAIMER
==========
This e-mail may contain privileged and confidential information which is the property of Persistent Systems Ltd. It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Ltd. does not accept any liability for virus infected mails.
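
After correcting those keys (the underscores matter: network_api_class, service_metadata_proxy, admin_tenant_name, and so on), a quick restart-and-verify sequence on Ubuntu would look roughly like:

service nova-api restart
service nova-scheduler restart
service nova-conductor restart
# on the compute node
service nova-compute restart

nova net-list    # should now proxy through to the neutron networks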

[Openstack] [openstack][neutron] tox -e py27 is not working in the latest neutron code

Hi All,

I am trying to run the unit test cases in the neutron code using "tox -e py27", but it is not working.
I commented out pbr in requirements.txt to avoid issues related to pbr, but I am still getting the following errors.
I removed the .tox folder and tried again, but it is still the same issue. Please help me here.

Console logs:


ERROR: invocation failed, logfile: /root/mirji/neutron/.tox/py27/log/py27-1.log
ERROR: actionid=py27
msg=getenv
cmdargs=[local('/root/mirji/neutron/.tox/py27/bin/pip'), 'install', '-U', '-r/root/mirji/neutron/requirements.txt', '-r/root/mirji/neutron/test-requirements.txt']
env={'PYTHONIOENCODING': 'utf8', 'http_proxy': 'http://web-proxy.rose.hp.com:8088/', 'LESSOPEN': '| /usr/bin/lesspipe %s', 'LOGNAME': 'root', 'USER': 'root', 'PATH': '/root/mirji/neutron/.tox/py27/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 'HOME': '/root/mirji/neutron/.tox/py27/tmp/pseudo-home', 'LANG': 'en_US.UTF-8', 'TERM': 'screen', 'SHELL': '/bin/bash', 'no_proxy': 'localhost,127.0.0.1/8,15.,16.,.hp.com', 'https_proxy': 'http://web-proxy.rose.hp.com:8088/', 'PYTHONHASHSEED': '0', 'SUDO_USER': 'sdn', 'TOX_INDEX_URL': 'http://pypi.openstack.org/openstack', 'USERNAME': 'root', 'PIP_INDEX_URL': 'http://pypi.openstack.org/openstack', 'SUDO_UID': '1000', 'VIRTUAL_ENV': '/root/mirji/neutron/.tox/py27', '_': '/usr/local/bin/tox', 'SUDO_COMMAND': '/bin/bash', 'SUDO_GID': '1000', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'OLDPWD': '/root/mirji', 'SHLVL': '1', 'PWD': '/root/mirji/neutron/neutron/tests/unit', 'MAIL': '/var/mail/root', 'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:'}
Downloading/unpacking Paste (from -r /root/mirji/neutron/requirements.txt (line 3))
Could not find any downloads that satisfy the requirement Paste (from -r /root/mirji/neutron/requirements.txt (line 3))
Cleaning up...
No distributions at all found for Paste (from -r /root/mirji/neutron/requirements.txt (line 3))
Storing complete log in /root/mirji/neutron/.tox/py27/tmp/pseudo-home/.pip/pip.log

ERROR: could not install deps [-r/root/mirji/neutron/requirements.txt, -r/root/mirji/neutron/test-requirements.txt]
______________________________________________________ summary _______________________________________________________
ERROR: py27: could not install deps [-r/root/mirji/neutron/requirements.txt, -r/root/mirji/neutron/test-requirements.txt]

Thanks in advances,
Koteswar


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

On 2014-09-12 10:14:00 +0000 (+0000), Kelam, Koteswara Rao wrote:
I am trying to run unit test cases in neutron code using "tox -e
py27" but it is not working.
[...]

You already posted this development question to the openstack-dev
list, which is the appropriate place for such discussions. Please do
not post the same question to multiple mailing lists solely to
broaden its audience.
--
Jeremy Stanley
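
On the actual pip failure: the log shows PIP_INDEX_URL and TOX_INDEX_URL pointing at http://pypi.openstack.org/openstack, a mirror that does not carry every dependency (Paste, in this case). A quick way to test that theory, assuming direct access to the official index is allowed through the proxy:

unset PIP_INDEX_URL TOX_INDEX_URL
rm -rf .tox
tox -e py27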

[Openstack] Fwd: [Openstack-operators] Nova HTTPConnectionPool Error

Hi,
I traversed all of the documents and Q/A websites but I couldn't find
the solution. My nova is down because of an unexpected error logged in the
nova-api.log file (see the message thread).
How can I solve this problem?
Thanks in advance...

---------- Forwarded message ----------
From: Hossein Zabolzadeh zabolzadeh@gmail.com
Date: Fri, 12 Sep 2014 13:31:25 +0430
Subject: Re: [Openstack-operators] Nova HTTPConnectionPool Error
To: Razique Mahroua razique.mahroua@gmail.com

I fixed it. But now a new error message was shown in my nova-compute.log:
unexpected error while running command. command: sudo nova-rootwrap
/etc/nova/rootwrap.conf iptables-restore -c
exit code: 2
stdout: ''
stderr: "iptables-restore v1.4.21: iptables-restore: unable to
initialize table 'nat'

How can I fix it?
My iptable_filter kernel module is also loaded.

On 9/12/14, Razique Mahroua razique.mahroua@gmail.com wrote:

Check your nova.conf to make sure:
A- You are not using any credentials
B- You are and they match the ones you are using for RabbitMQ

On Sep 11, 2014, at 13:27, Hossein Zabolzadeh zabolzadeh@gmail.com wrote:

My nova-compute.log contains:
Connecting to AMQP server on localhost:5672
ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server
localhost:6572 closed the connection. Check login credentials: Socket
closed

On 9/12/14, Razique Mahroua razique.mahroua@gmail.com wrote:

Hi, look into /var/log/nova/nova-compute.log to understand why the
service
isn’t started!

On Sep 11, 2014, at 13:16, Hossein Zabolzadeh zabolzadeh@gmail.com
wrote:

Hi there,
After successful installation of both keystone and glance, my Nova
service didn't work. The following error occurred when I executed:
'nova list'


ERROR: HTTPConnectionPool(host='localhost', port=8774): Max retries
exceeded with url: /v2/19934884vr78as87437483bb1/servers/detail
(caused by : [errno 111] Connection Refused)

Can someone help me to fix it?
Thanks in advance.


OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

I solved my problem (see the following error message) by building a
new kernel with nf_nat and nf_nat_ipv4 enabled. The problem
occurred because the nf_nat and nf_nat_ipv4 netfilter options
were disabled in my old kernel.
Another option is to download a new kernel image with netfilter nf_nat
and nf_nat_ipv4 enabled, instead of building a new kernel.


stderr: "iptables-restore v1.4.21: iptables-restore: unable to
initialize table 'nat'

On 9/12/14, Hossein Zabolzadeh zabolzadeh@gmail.com wrote:
Hi,
I traversed all of the documents and Q/A websites but I couldn't find
the solution. My nova is down because of an unexpected error logged in the
nova-api.log file (see the message thread).
How can I solve this problem?
Thanks in advance...

---------- Forwarded message ----------
From: Hossein Zabolzadeh zabolzadeh@gmail.com
Date: Fri, 12 Sep 2014 13:31:25 +0430
Subject: Re: [Openstack-operators] Nova HTTPConnectionPool Error
To: Razique Mahroua razique.mahroua@gmail.com

I fixed it. But now a new error message was shown in my nova-compute.log:
unexpected error while running command. command: sudo nova-rootwrap
/etc/nova/rootwrap.conf iptables-restore -c
exit code: 2
stdout: ''
stderr: "iptables-restore v1.4.21: iptables-restore: unable to
initialize table 'nat'

How can I fix it?
My iptable_filter kernel module is also loaded.

On 9/12/14, Razique Mahroua razique.mahroua@gmail.com wrote:

Check your nova.conf to make sure:
A- You are not using any credentials
B- You are and they match the ones you are using for RabbitMQ

On Sep 11, 2014, at 13:27, Hossein Zabolzadeh zabolzadeh@gmail.com
wrote:

My nova-compute.log contains:
Connecting to AMQP server on localhost:5672
ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server
localhost:6572 closed the connection. Check login credentials: Socket
closed

On 9/12/14, Razique Mahroua razique.mahroua@gmail.com wrote:

Hi, look into /var/log/nova/nova-compute.log to understand why the
service
isn’t started!

On Sep 11, 2014, at 13:16, Hossein Zabolzadeh zabolzadeh@gmail.com
wrote:

Hi there,
After successful installation of both keystone and glance, my Nova
service didn't work. The following error occurred when I executed:
'nova list'


ERROR: HTTPConnectionPool(host='localhost', port=8774): Max retries
exceeded with url: /v2/19934884vr78as87437483bb1/servers/detail
(caused by : [errno 111] Connection Refused)

Can someone help me to fix it?
Thanks in advance.


OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
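
For anyone else landing here, the quick checks before rebuilding a kernel look like this:

# is NAT support present at all?
lsmod | grep -E 'iptable_nat|nf_nat'
modprobe iptable_nat

# this fails with "can't initialize iptables table `nat'" when the kernel lacks NAT
iptables -t nat -L

If modprobe and the nat table listing succeed, nova-rootwrap's iptables-restore should work without a kernel rebuild.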

[Openstack] [cinder][filters] custom filters

Hello guys,

Could somebody please point me to some documentation on how to configure
additional filters in cinder? I've written a filter but could not figure out
how to use it in series with the other filters.
Copying the filter to the filters directory and setting
'scheduler_default_filters' in cinder.conf did not do the trick.
I'm getting: SchedulerHostFilterNotFound: Scheduler Host Filter
filter1x1 could not be found.

Best Regards,


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
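
In Icehouse-era Cinder the scheduler discovers filters through the scheduler_available_filters option (which defaults to the built-in set), not by scanning the filters directory, so a custom class has to be registered explicitly. A sketch, with the module path standing in for wherever your filter actually lives:

# cinder.conf
[DEFAULT]
scheduler_available_filters = cinder.scheduler.filters.all_filters
scheduler_available_filters = mypackage.filters.MyCustomFilter
scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,MyCustomFilter

Restart cinder-scheduler afterwards; SchedulerHostFilterNotFound goes away once the class is importable under the name listed in scheduler_default_filters.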

[Openstack] SWIFT AND RACKSPACE CLOUDFILES API

Is there an open source middleware/api-extender that supports setting CDN
cache control at the account, container, or object level?

Something similar to what Rackspace supports?

http://www.bybe.net/blog/how-to-fix-rackspace-file-cloud-leverage-browser-caching-via-api-ssh.html

Thanks!


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Rackspace devs wrote, open-sourced, and deploy the Swift Origin Service
(SOS) middleware:

https://github.com/dpgoetz/sos

So you could look at that ;)

-Clay

On Fri, Sep 12, 2014 at 12:41 PM, Brent Troge brenttroge2016@gmail.com
wrote:

Is there an open source middleware/api-extender that supports setting CDN
cache control at the account, container, or object level?

Something similar to what Rackspace supports?

http://www.bybe.net/blog/how-to-fix-rackspace-file-cloud-leverage-browser-caching-via-api-ssh.html

Thanks!


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
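
From memory of the SOS README (worth double-checking against the repo, since the exact filter and egg names may differ by version), wiring it in looks roughly like adding the filter to the proxy pipeline in proxy-server.conf:

[pipeline:main]
pipeline = catch_errors healthcheck cache origin proxy-server

[filter:origin]
use = egg:sos#origin

CDN-style cache-control is then managed per container through the middleware's API rather than by Swift itself.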

[Openstack] icehouse install stuck on cinder , page 80

hi,
I am working through the icehouse install guide for ubuntu,
and I am stuck on page 80 with the following error messages. any idea?
thanks !

root@controller:/var/log/cinder# keystone endpoint-create \

--service-id=$(keystone service-list | awk '/ volume / {print $2}') \
--publicurl=http://controller:8776/v1/%(tenant_id)s \
--internalurl=http://controller:8776/v1/%(tenant_id)s \
--adminurl=http://controller:8776/v1/%(tenant_id)s
usage: keystone [--version] [--timeout <seconds>]
                [--os-username <auth-user-name>]
                [--os-password <auth-password>]
                [--os-tenant-name <auth-tenant-name>]
                [--os-tenant-id <tenant-id>] [--os-auth-url <auth-url>]
                [--os-region-name <region-name>]
                [--os-identity-api-version <identity-api-version>]
                [--os-token <service-token>]
                [--os-endpoint <service-endpoint>]
                [--os-cacert <ca-certificate>] [--insecure]
                [--os-cert <certificate>] [--os-key <key>] [--os-cache]
                [--force-new-token] [--stale-duration <seconds>]
                <subcommand> ...
keystone: error: unrecognized arguments: 21b81cc68515472abd723b00b3909af5
39e0e9f2a158456891e85f1af308ca0b 5ecdab1834f1462987db10884fe44a90
929ab81e9dcf40469cead70e92dbf309 9efca880f92547ed8595402617cac04e
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Look at keystone service-list by itself and see what it outputs. You
can then decipher what awk '/ volume / {print $2}' returns.

What you want to happen is for --service-id to receive exactly one ID, the
volume service's.

On Fri, Sep 12, 2014 at 7:52 PM, b t 905ben@gmail.com wrote:

hi,
I am working through icehouse install guide for ubuntu .
and stuck on page 80 seeing the following error messages . any idea ?
thanks !

root@controller:/var/log/cinder# keystone endpoint-create \

--service-id=$(keystone service-list | awk '/ volume / {print $2}') \
--publicurl=http://controller:8776/v1/%(tenant_id)s \
--internalurl=http://controller:8776/v1/%(tenant_id)s \
--adminurl=http://controller:8776/v1/%(tenant_id)s
usage: keystone [--version] [--timeout <seconds>]
                [--os-username <auth-user-name>]
                [--os-password <auth-password>]
                [--os-tenant-name <auth-tenant-name>]
                [--os-tenant-id <tenant-id>] [--os-auth-url <auth-url>]
                [--os-region-name <region-name>]
                [--os-identity-api-version <identity-api-version>]
                [--os-token <service-token>]
                [--os-endpoint <service-endpoint>]
                [--os-cacert <ca-certificate>] [--insecure]
                [--os-cert <certificate>] [--os-key <key>] [--os-cache]
                [--force-new-token] [--stale-duration <seconds>]
                <subcommand> ...
keystone: error: unrecognized arguments: 21b81cc68515472abd723b00b3909af5
39e0e9f2a158456891e85f1af308ca0b 5ecdab1834f1462987db10884fe44a90
929ab81e9dcf40469cead70e92dbf309 9efca880f92547ed8595402617cac04e


Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
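
The tell-tale part of the error is that five IDs show up as unrecognized arguments: the awk pipeline matched several services (the guide's commands were likely run more than once, creating duplicates), so $(...) expanded to multiple values. A sketch of the manual fix:

keystone service-list                  # find the duplicate 'volume' rows
keystone service-delete <extra-id>     # remove the duplicates

keystone endpoint-create \
  --service-id=<the-remaining-volume-service-id> \
  --publicurl='http://controller:8776/v1/%(tenant_id)s' \
  --internalurl='http://controller:8776/v1/%(tenant_id)s' \
  --adminurl='http://controller:8776/v1/%(tenant_id)s'

Quoting the URLs also keeps the shell from mangling %(tenant_id)s.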

[Openstack] cinder create error

I am installing icehouse on ubuntu.
now I get to the point of creating a volume and I get an error status, and sometimes
it is stuck on creating.
any idea?
here is the log. thanks!

root@controller:/var/log/cinder#
root@controller:/var/log/cinder# cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+-------------+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+--------------+------+-------------+----------+-------------+
| 1e8b59be-454c-4532-a2ec-8457c0a04486 | error  |   myVolume   |  1   |     None    |  false   |             |
| acfdad32-f81c-487f-8afa-19ee8ba6c989 | error  |   myVolume   |  1   |     None    |  false   |             |
| b2cd112b-ecef-4c1e-9501-ebda34492657 | error  |   my-disk    |  2   |     None    |  false   |             |
+--------------------------------------+--------+--------------+------+-------------+----------+-------------+

root@controller:/var/log/cinder# more cinder-scheduler.log
2014-09-12 21:58:02.875 10328 AUDIT cinder.service [-] Starting cinder-scheduler node (version 2014.1.2)
2014-09-12 21:58:02.933 10328 INFO oslo.messaging._drivers.impl_rabbit [req-d3ab770b-8cd0-4d5c-8939-6ef3ad8f92ca - - - - -] Connected to AMQP server on controller:5672
2014-09-12 21:58:03.789 10328 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on controller:5672
2014-09-12 21:59:10.423 10328 WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'33444db757c14559add65a2e606c2102', 'tenant': u'd4c3bf6ee2af44129fc38bf76f4a5623', 'user_identity': u'33444db757c14559add65a2e606c2102 d4c3bf6ee2af44129fc38bf76f4a5623 - - -'}
2014-09-12 21:59:11.028 10328 ERROR cinder.scheduler.flows.create_volume [req-03be4b43-29d2-4208-8924-853840b9214b 33444db757c14559add65a2e606c2102 d4c3bf6ee2af44129fc38bf76f4a5623 - - -] Failed to schedule_create_volume: No valid host was found.
2014-09-12 22:47:41.737 10328 WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'33444db757c14559add65a2e606c2102', 'tenant': u'd4c3bf6ee2af44129fc38bf76f4a5623', 'user_identity': u'33444db757c14559add65a2e606c2102 d4c3bf6ee2af44129fc38bf76f4a5623 - - -'}
2014-09-12 22:47:41.757 10328 ERROR cinder.scheduler.flows.create_volume [req-5c268fcf-5c4d-49ed-be1b-0460411e76b0 33444db757c14559add65a2e606c2102 d4c3bf6ee2af44129fc38bf76f4a5623 - - -] Failed to schedule_create_volume: No valid host was found.
2014-09-12 22:56:01.742 10328 WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'dfbdec26a06a4c5cb35f994937b092fb', 'tenant': u'0faf8d7dfdac4a53af195b5436992976', 'user_identity': u'dfbdec26a06a4c5cb35f994937b092fb 0faf8d7dfdac4a53af195b5436992976 - - -'}
2014-09-12 22:56:01.765 10328 ERROR cinder.scheduler.flows.create_volume [req-230d0a9c-851c-4612-9549-cc9fb07a2197 dfbdec26a06a4c5cb35f994937b092fb 0faf8d7dfdac4a53af195b5436992976 - - -] Failed to schedule_create_volume: No valid host was found.
root@controller:/var/log/cinder#
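
(For what it's worth, "Failed to schedule_create_volume: No valid host was found" almost always means no cinder-volume service is up and reporting capacity. A quick check sequence, assuming the install guide's LVM backend:

cinder service-list            # cinder-volume should show state 'up'
vgs cinder-volumes             # the backing volume group must exist
service cinder-volume restart
tail /var/log/cinder/cinder-volume.log

The API log that follows just shows the volumes already sitting in 'error'/'creating' states.)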

2014-09-12 22:47:48.097 10354 INFO eventlet.wsgi.server [-] (10354)
accepted ('192.168.1.80', 44370)
2014-09-12 22:47:48.192 10354 INFO cinder.api.openstack.wsgi
[req-eee31803-7249-403c-90bf-912ed6a0f3e4 33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -] GET http://controll
er:8776/v1/d4c3bf6ee2af44129fc38bf76f4a5623/volumes/detail
2014-09-12 22:47:48.228 10354 AUDIT cinder.api.v1.volumes
[req-eee31803-7249-403c-90bf-912ed6a0f3e4 33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -] vol={'migrationstatus
': None, 'availability
zone': u'nova', 'terminatedat': None, 'updatedat':
datetime.datetime(2014, 9, 13, 2, 47, 41), 'providergeometry': None,
'snapshot
id': None, 'ec2id': None, 'mountpoint'
: None, 'deleted
at': None, 'id': u'da4108f1-f419-4f02-8e0d-29ca5d3c9c0f',
'size': 2L, 'userid': u'33444db757c14559add65a2e606c2102', 'attachtime':
None, 'attachedhost': None, 'displaydescrip
tion': None, 'volumeadminmetadata': [], 'encryptionkeyid': None,
'projectid': u'd4c3bf6ee2af44129fc38bf76f4a5623', 'launchedat': None,
'scheduledat': None, 'status': u'error', 'volumetype
id': None, 'deleted': False, 'providerlocation': None, 'host': None,
'sourcevolid': None, 'providerauth': None, 'displayname': u'my-disk',
'instance
uuid': None, 'bootable': False, 'created_
at': datetime.datetime(2014, 9, 13, 2, 47, 41), 'attachstatus':
u'detached', 'volume
type': None, 'nameid': None, 'volumemetadata': []}
2014-09-12 22:47:48.229 10354 AUDIT cinder.api.v1.volumes
[req-eee31803-7249-403c-90bf-912ed6a0f3e4 33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -] vol={'migration
status
': None, 'availabilityzone': u'nova', 'terminatedat': None, 'updatedat':
datetime.datetime(2014, 9, 13, 1, 59, 11), 'provider
geometry': None,
'snapshotid': None, 'ec2id': None, 'mountpoint'
: None, 'deletedat': None, 'id': u'0b5a3e41-c806-4931-9767-5c17575dcf7d',
'size': 1L, 'user
id': u'33444db757c14559add65a2e606c2102', 'attachtime':
None, 'attached
host': None, 'displaydescrip
tion': None, 'volume
adminmetadata': [], 'encryptionkeyid': None,
'project
id': u'd4c3bf6ee2af44129fc38bf76f4a5623', 'launchedat': None,
'scheduled
at': None, 'status': u'error', 'volumetype
_id': None, 'deleted': False, 'provider
location': None, 'host': None,
'sourcevolid': None, 'providerauth': None, 'displayname': u'myVolume',
'instance
uuid': None, 'bootable': False, 'created
at': datetime.datetime(2014, 9, 13, 1, 59, 10), 'attachstatus':
u'detached', 'volumetype': None, 'nameid': None, 'volumemetadata': []}
2014-09-12 22:47:48.230 10354 AUDIT cinder.api.v1.volumes
[req-eee31803-7249-403c-90bf-912ed6a0f3e4 33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -] vol={'migrationstatus
': None, 'availability
zone': u'nova', 'terminatedat': None, 'updatedat':
None, 'providergeometry': None, 'snapshotid': None, 'ec2id': None,
'mountpoint': None, 'deleted
at': None, 'id': u'3
78f6e84-db04-4375-813d-ea112ef247df', 'size': 1L, 'userid':
u'33444db757c14559add65a2e606c2102', 'attach
time': None, 'attachedhost':
None, 'display
description': None, 'volumeadminmetadata':
[], 'encryptionkeyid': None, 'projectid':
u'd4c3bf6ee2af44129fc38bf76f4a5623', 'launched
at': None, 'scheduledat':
None, 'status': u'creating', 'volume
typeid': None, 'deleted': False, 'pro
vider
location': None, 'host': None, 'sourcevolid': None, 'providerauth':
None, 'displayname': u'myVolume', 'instanceuuid': None, 'bootable':
False, 'createdat': datetime.datetime(2014, 9, 1
1, 4, 31, 49), 'attach
status': u'detached', 'volumetype': None,
'
nameid': None, 'volumemetadata': []}
2014-09-12 22:47:48.233 10354 INFO cinder.api.openstack.wsgi
[req-eee31803-7249-403c-90bf-912ed6a0f3e4 33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -]
http://controller:8776/v1/d4c3bf6ee2af44129fc38bf76f4a5623/volumes/detail
returned with HTTP 200
2014-09-12 22:47:48.238 10354 INFO eventlet.wsgi.server
[req-eee31803-7249-403c-90bf-912ed6a0f3e4 33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -] 192.168.1.80 - -
[12/Sep/2014 22:47:48] "GET /v1/d4c3bf6ee2af44129fc38bf76f4a5623/volumes/detail
HTTP/1.1" 200 1509 0.135516
2014-09-12 22:48:34.993 10354 INFO eventlet.wsgi.server [-] (10354)
accepted ('192.168.1.80', 44372)
2014-09-12 22:48:35.121 10354 INFO cinder.api.openstack.wsgi
[req-03cb5178-d3d5-4e48-b502-015faec5bac3 33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -] GET
http://controller:8776/v1/d4c3bf6ee2af44129fc38bf76f4a5623/volumes/detail
2014-09-12 22:48:35.156 10354 AUDIT cinder.api.v1.volumes
[req-03cb5178-d3d5-4e48-b502-015faec5bac3 33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -] vol={'migration_status': None,
'availability_zone': u'nova', 'terminated_at': None, 'updated_at':
datetime.datetime(2014, 9, 13, 2, 47, 41), 'provider_geometry': None,
'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None,
'id': u'da4108f1-f419-4f02-8e0d-29ca5d3c9c0f', 'size': 2L,
'user_id': u'33444db757c14559add65a2e606c2102', 'attach_time': None,
'attached_host': None, 'display_description': None,
'volume_admin_metadata': [], 'encryption_key_id': None,
'project_id': u'd4c3bf6ee2af44129fc38bf76f4a5623', 'launched_at': None,
'scheduled_at': None, 'status': u'error', 'volume_type_id': None,
'deleted': False, 'provider_location': None, 'host': None,
'source_volid': None, 'provider_auth': None, 'display_name': u'my-disk',
'instance_uuid': None, 'bootable': False, 'created_at':
datetime.datetime(2014, 9, 13, 2, 47, 41), 'attach_status': u'detached',
'volume_type': None, 'name_id': None, 'volume_metadata': []}
2014-09-12 22:48:35.156 10354 AUDIT cinder.api.v1.volumes
[req-03cb5178-d3d5-4e48-b502-015faec5bac3 33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -] vol={'migration_status': None,
'availability_zone': u'nova', 'terminated_at': None, 'updated_at':
datetime.datetime(2014, 9, 13, 1, 59, 11), 'provider_geometry': None,
'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None,
'id': u'0b5a3e41-c806-4931-9767-5c17575dcf7d', 'size': 1L,
'user_id': u'33444db757c14559add65a2e606c2102', 'attach_time': None,
'attached_host': None, 'display_description': None,
'volume_admin_metadata': [], 'encryption_key_id': None,
'project_id': u'd4c3bf6ee2af44129fc38bf76f4a5623', 'launched_at': None,
'scheduled_at': None, 'status': u'error', 'volume_type_id': None,
'deleted': False, 'provider_location': None, 'host': None,
'source_volid': None, 'provider_auth': None, 'display_name': u'myVolume',
'instance_uuid': None, 'bootable': False, 'created_at':
datetime.datetime(2014, 9, 13, 1, 59, 10), 'attach_status': u'detached',
'volume_type': None, 'name_id': None, 'volume_metadata': []}
2014-09-12 22:48:35.157 10354 AUDIT cinder.api.v1.volumes
[req-03cb5178-d3d5-4e48-b502-015faec5bac3 33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -] vol={'migration_status': None,
'availability_zone': u'nova', 'terminated_at': None, 'updated_at': None,
'provider_geometry': None, 'snapshot_id': None, 'ec2_id': None,
'mountpoint': None, 'deleted_at': None,
'id': u'378f6e84-db04-4375-813d-ea112ef247df', 'size': 1L,
'user_id': u'33444db757c14559add65a2e606c2102', 'attach_time': None,
'attached_host': None, 'display_description': None,
'volume_admin_metadata': [], 'encryption_key_id': None,
'project_id': u'd4c3bf6ee2af44129fc38bf76f4a5623', 'launched_at': None,
'scheduled_at': None, 'status': u'creating', 'volume_type_id': None,
'deleted': False, 'provider_location': None, 'host': None,
'source_volid': None, 'provider_auth': None, 'display_name': u'myVolume',
'instance_uuid': None, 'bootable': False, 'created_at':
datetime.datetime(2014, 9, 11, 4, 31, 49), 'attach_status': u'detached',
'volume_type': None, 'name_id': None, 'volume_metadata': []}
2014-09-12 22:48:35.160 10354 INFO cinder.api.openstack.wsgi
[req-03cb5178-d3d5-4e48-b502-015faec5bac3 33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -]
http://controller:8776/v1/d4c3bf6ee2af44129fc38bf76f4a5623/volumes/detail
returned with HTTP 200
2014-09-12 22:48:35.167 10354 INFO eventlet.wsgi.server
[req-03cb5178-d3d5-4e48-b502-015faec5bac3 33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -] 192.168.1.80 - -
[12/Sep/2014 22:48:35] "GET /v1/d4c3bf6ee2af44129fc38bf76f4a5623/volumes/detail
HTTP/1.1" 200 1509 0.168161
2014-09-12 22:49:00.130 10354 INFO eventlet.wsgi.server [-] (10354)
accepted ('192.168.1.80', 44376)
2014-09-12 22:49:00.223 10354 INFO cinder.api.openstack.wsgi
[req-bb7bd127-38b8-4866-83c2-2f6e6fba4000 33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -] GET
http://controller:8776/v1/d4c3bf6ee2af44129fc38bf76f4a5623/os-services
2014-09-12 22:49:00.232 10354 INFO cinder.api.openstack.wsgi
[req-bb7bd127-38b8-4866-83c2-2f6e6fba4000 33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -]
http://controller:8776/v1/d4c3bf6ee2af44129fc38bf76f4a5623/os-services
returned with HTTP 403
2014-09-12 22:49:00.242 10354 INFO eventlet.wsgi.server
[req-bb7bd127-38b8-4866-83c2-2f6e6fba4000 33444db757c14559add65a2e606c2102
d4c3bf6ee2af44129fc38bf76f4a5623 - - -] 192.168.1.80 - -
[12/Sep/2014 22:49:00] "GET /v1/d4c3bf6ee2af44129fc38bf76f4a5623/os-services
HTTP/1.1" 403 367 0.109804
2014-09-12 22:50:06.864 10354 INFO eventlet.wsgi.server [-] (10354)
accepted ('192.168.1.80', 44384)
2014-09-12 22:50:07.017 10354 INFO cinder.api.openstack.wsgi
[req-59098387-5f76-4aff-91f7-1e3d77bd42bc dfbdec26a06a4c5cb35f994937b092fb
0faf8d7dfdac4a53af195b5436992976 - - -] GET
http://controller:8776/v1/0faf8d7dfdac4a53af195b5436992976/os-services
2014-09-12 22:50:07.057 10354 INFO cinder.api.openstack.wsgi
[req-59098387-5f76-4aff-91f7-1e3d77bd42bc dfbdec26a06a4c5cb35f994937b092fb
0faf8d7dfdac4a53af195b5436992976 - - -]
http://controller:8776/v1/0faf8d7dfdac4a53af195b5436992976/os-services
returned with HTTP 200
2014-09-12 22:50:07.095 10354 INFO eventlet.wsgi.server
[req-59098387-5f76-4aff-91f7-1e3d77bd42bc dfbdec26a06a4c5cb35f994937b092fb
0faf8d7dfdac4a53af195b5436992976 - - -] 192.168.1.80 - -
[12/Sep/2014 22:50:07] "GET /v1/0faf8d7dfdac4a53af195b5436992976/os-services
HTTP/1.1" 200 428 0.227918
2014-09-12 22:55:42.239 10354 INFO eventlet.wsgi.server [-] (10354)
accepted ('192.168.1.80', 44387)
2014-09-12 22:55:42.276 10354 INFO urllib3.connectionpool [-] Starting new
HTTP connection (1): controller
2014-09-12 22:55:42.710 10354 INFO cinder.api.openstack.wsgi
[req-979a1d04-adbd-42b4-81a3-f5757f54d2a2 dfbdec26a06a4c5cb35f994937b092fb
0faf8d7dfdac4a53af195b5436992976 - - -] GET
http://controller:8776/v1/0faf8d7dfdac4a53af195b5436992976/volumes/detail
2014-09-12 22:55:42.794 10354 AUDIT cinder.api.v1.volumes
[req-979a1d04-adbd-42b4-81a3-f5757f54d2a2 dfbdec26a06a4c5cb35f994937b092fb
0faf8d7dfdac4a53af195b5436992976 - - -] vol={'migration_status': None,
'availability_zone': u'nova', 'terminated_at': None, 'updated_at':
datetime.datetime(2014, 9, 11, 1, 21, 45), 'provider_geometry': None,
'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None,
'id': u'1e8b59be-454c-4532-a2ec-8457c0a04486', 'size': 1L,
'user_id': u'dfbdec26a06a4c5cb35f994937b092fb', 'attach_time': None,
'attached_host': None, 'display_description': None,
'volume_admin_metadata': [], 'encryption_key_id': None,
'project_id': u'0faf8d7dfdac4a53af195b5436992976', 'launched_at': None,
'scheduled_at': None, 'status': u'error', 'volume_type_id': None,
'deleted': False, 'provider_location': None, 'host': None,
'source_volid': None, 'provider_auth': None, 'display_name': u'myVolume',
'instance_uuid': None, 'bootable': False, 'created_at':
datetime.datetime(2014, 9, 11, 1, 21, 45), 'attach_status': u'detached',
'volume_type': None, 'name_id': None, 'volume_metadata': []}
2014-09-12 22:55:42.795 10354 AUDIT cinder.api.v1.volumes
[req-979a1d04-adbd-42b4-81a3-f5757f54d2a2 dfbdec26a06a4c5cb35f994937b092fb
0faf8d7dfdac4a53af195b5436992976 - - -] vol={'migration_status': None,
'availability_zone': u'nova', 'terminated_at': None, 'updated_at':
datetime.datetime(2014, 9, 11, 1, 21, 1), 'provider_geometry': None,
'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None,
'id': u'acfdad32-f81c-487f-8afa-19ee8ba6c989', 'size': 1L,
'user_id': u'dfbdec26a06a4c5cb35f994937b092fb', 'attach_time': None,
'attached_host': None, 'display_description': None,
'volume_admin_metadata': [], 'encryption_key_id': None,
'project_id': u'0faf8d7dfdac4a53af195b5436992976', 'launched_at': None,
'scheduled_at': None, 'status': u'error', 'volume_type_id': None,
'deleted': False, 'provider_location': None, 'host': None,
'source_volid': None, 'provider_auth': None, 'display_name': u'myVolume',
'instance_uuid': None, 'bootable': False, 'created_at':
datetime.datetime(2014, 9, 11, 1, 21, 1), 'attach_status': u'detached',
'volume_type': None, 'name_id': None, 'volume_metadata': []}
2014-09-12 22:55:42.939 10354 INFO cinder.api.openstack.wsgi
[req-979a1d04-adbd-42b4-81a3-f5757f54d2a2 dfbdec26a06a4c5cb35f994937b092fb
0faf8d7dfdac4a53af195b5436992976 - - -]
http://controller:8776/v1/0faf8d7dfdac4a53af195b5436992976/volumes/detail
returned with HTTP 200
2014-09-12 22:55:42.947 10354 INFO eventlet.wsgi.server
[req-979a1d04-adbd-42b4-81a3-f5757f54d2a2 dfbdec26a06a4c5cb35f994937b092fb
0faf8d7dfdac4a53af195b5436992976 - - -] 192.168.1.80 - -
[12/Sep/2014 22:55:42] "GET /v1/0faf8d7dfdac4a53af195b5436992976/volumes/detail
HTTP/1.1" 200 1311 0.699988
2014-09-12 22:56:01.516 10354 INFO eventlet.wsgi.server [-] (10354)
accepted ('192.168.1.80', 44390)
2014-09-12 22:56:01.630 10354 INFO cinder.api.openstack.wsgi
[req-230d0a9c-851c-4612-9549-cc9fb07a2197 dfbdec26a06a4c5cb35f994937b092fb
0faf8d7dfdac4a53af195b5436992976 - - -] POST
http://controller:8776/v1/0faf8d7dfdac4a53af195b5436992976/volumes
2014-09-12 22:56:01.634 10354 AUDIT cinder.api.v1.volumes
[req-230d0a9c-851c-4612-9549-cc9fb07a2197 dfbdec26a06a4c5cb35f994937b092fb
0faf8d7dfdac4a53af195b5436992976 - - -] Create volume of 2 GB
2014-09-12 22:56:01.747 10354 AUDIT cinder.api.v1.volumes
[req-230d0a9c-851c-4612-9549-cc9fb07a2197 dfbdec26a06a4c5cb35f994937b092fb
0faf8d7dfdac4a53af195b5436992976 - - -] vol={'migration_status': None,
'availability_zone': 'nova', 'terminated_at': None, 'reservations':
['fbff06b4-df12-42b9-944b-e6720cc43051',
'ad9c2f94-35b2-49c1-ac3c-ce7aabbb2b58'], 'updated_at': None,
'provider_geometry': None, 'snapshot_id': None, 'ec2_id': None,
'mountpoint': None, 'deleted_at': None,
'id': 'b2cd112b-ecef-4c1e-9501-ebda34492657', 'size': 2,
'user_id': u'dfbdec26a06a4c5cb35f994937b092fb', 'attach_time': None,
'attached_host': None, 'display_description': None,
'volume_admin_metadata': [], 'encryption_key_id': None,
'project_id': u'0faf8d7dfdac4a53af195b5436992976', 'launched_at': None,
'scheduled_at': None, 'status': 'creating', 'volume_type_id': None,
'deleted': False, 'provider_location': None, 'host': None,
'source_volid': None, 'provider_auth': None, 'display_name': u'my-disk',
'instance_uuid': None, 'bootable': False, 'created_at':
datetime.datetime(2014, 9, 13, 2, 56, 1, 688573), 'attach_status':
'detached', 'volume_type': None, 'name_id': None, 'volume_metadata': [],
'metadata': {}}
2014-09-12 22:56:01.750 10354 INFO cinder.api.openstack.wsgi
[req-230d0a9c-851c-4612-9549-cc9fb07a2197 dfbdec26a06a4c5cb35f994937b092fb
0faf8d7dfdac4a53af195b5436992976 - - -]
http://controller:8776/v1/0faf8d7dfdac4a53af195b5436992976/volumes
returned with HTTP 200
2014-09-12 22:56:01.763 10354 INFO eventlet.wsgi.server
[req-230d0a9c-851c-4612-9549-cc9fb07a2197 dfbdec26a06a4c5cb35f994937b092fb
0faf8d7dfdac4a53af195b5436992976 - - -] 192.168.1.80 - -
[12/Sep/2014 22:56:01] "POST /v1/0faf8d7dfdac4a53af195b5436992976/volumes
HTTP/1.1" 200 601 0.240412
2014-09-12 22:56:06.174 10354 INFO eventlet.wsgi.server [-] (10354)
accepted ('192.168.1.80', 44392)
2014-09-12 22:56:06.248 10354 INFO cinder.api.openstack.wsgi
[req-3a170e85-73e5-47e6-be63-7f8ad00c87d2 dfbdec26a06a4c5cb35f994937b092fb
0faf8d7dfdac4a53af195b5436992976 - - -] GET
http://controller:8776/v1/0faf8d7dfdac4a53af195b5436992976/volumes/detail
2014-09-12 22:56:06.291 10354 AUDIT cinder.api.v1.volumes
[req-3a170e85-73e5-47e6-be63-7f8ad00c87d2 dfbdec26a06a4c5cb35f994937b092fb
0faf8d7dfdac4a53af195b5436992976 - - -] vol={'migration_status': None,
'availability_zone': u'nova', 'terminated_at': None, 'updated_at':
datetime.datetime(2014, 9, 13, 2, 56, 1), 'provider_geometry': None,
'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None,
'id': u'b2cd112b-ecef-4c1e-9501-ebda34492657', 'size': 2L,
'user_id': u'dfbdec26a06a4c5cb35f994937b092fb', 'attach_time': None,
'attached_host': None, 'display_description': None,
'volume_admin_metadata': [], 'encryption_key_id': None,
'project_id': u'0faf8d7dfdac4a53af195b5436992976', 'launched_at': None,
'scheduled_at': None, 'status': u'error', 'volume_type_id': None,
'deleted': False, 'provider_location': None, 'host': None,
'source_volid': None, 'provider_auth': None, 'display_name': u'my-disk',
'instance_uuid': None, 'bootable': False, 'created_at':
datetime.datetime(2014, 9, 13, 2, 56, 1), 'attach_status': u'detached',
'volume_type': None, 'name_id': None, 'volume_metadata': []}
2014-09-12 22:56:06.292 10354 AUDIT cinder.api.v1.volumes
[req-3a170e85-73e5-47e6-be63-7f8ad00c87d2 dfbdec26a06a4c5cb35f994937b092fb
0faf8d7dfdac4a53af195b5436992976 - - -] vol={'migration_status': None,
'availability_zone': u'nova', 'terminated_at': None, 'updated_at':
datetime.datetime(2014, 9, 11, 1, 21, 45), 'provider_geometry': None,
'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None,
'id': u'1e8b59be-454c-4532-a2ec-8457c0a04486', 'size': 1L,
'user_id': u'dfbdec26a06a4c5cb35f994937b092fb', 'attach_time': None,
'attached_host': None, 'display_description': None,
'volume_admin_metadata': [], 'encryption_key_id': None,
'project_id': u'0faf8d7dfdac4a53af195b5436992976', 'launched_at': None,
'scheduled_at': None, 'status': u'error', 'volume_type_id': None,
'deleted': False, 'provider_location': None, 'host': None,
'source_volid': None, 'provider_auth': None, 'display_name': u'myVolume',
'instance_uuid': None, 'bootable': False, 'created_at':
datetime.datetime(2014, 9, 11, 1, 21, 45), 'attach_status': u'detached',
'volume_type': None, 'name_id': None, 'volume_metadata': []}
2014-09-12 22:56:06.293 10354 AUDIT cinder.api.v1.volumes
[req-3a170e85-73e5-47e6-be63-7f8ad00c87d2 dfbdec26a06a4c5cb35f994937b092fb
0faf8d7dfdac4a53af195b5436992976 - - -] vol={'migration_status': None,
'availability_zone': u'nova', 'terminated_at': None, 'updated_at':
datetime.datetime(2014, 9, 11, 1, 21, 1), 'provider_geometry': None,
'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None,
'id': u'acfdad32-f81c-487f-8afa-19ee8ba6c989', 'size': 1L,
'user_id': u'dfbdec26a06a4c5cb35f994937b092fb', 'attach_time': None,
'attached_host': None, 'display_description': None,
'volume_admin_metadata': [], 'encryption_key_id': None,
'project_id': u'0faf8d7dfdac4a53af195b5436992976', 'launched_at': None,
'scheduled_at': None, 'status': u'error', 'volume_type_id': None,
'deleted': False, 'provider_location': None, 'host': None,
'source_volid': None, 'provider_auth': None, 'display_name': u'myVolume',
'instance_uuid': None, 'bootable': False, 'created_at':
datetime.datetime(2014, 9, 11, 1, 21, 1), 'attach_status': u'detached',
'volume_type': None, 'name_id': None, 'volume_metadata': []}
2014-09-12 22:56:06.442 10354 INFO cinder.api.openstack.wsgi
[req-3a170e85-73e5-47e6-be63-7f8ad00c87d2 dfbdec26a06a4c5cb35f994937b092fb
0faf8d7dfdac4a53af195b5436992976 - - -] http://controller:8