
Re: [Openstack] Openstack Digest, Vol 39, Issue 29


After installing the dashboard, I got this error:

systemctl status httpd.service -l
● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor
preset: disabled)
Drop-In: /usr/lib/systemd/system/httpd.service.d
└─openstack-dashboard.conf
Active: failed (Result: exit-code) since 日 2016-09-18 19:11:52 CST; 14s
ago
Docs: man:httpd(8)
man:apachectl(8)
Process: 50843 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited,
status=0/SUCCESS)
Process: 58465 ExecStartPre=/usr/bin/python
/usr/share/openstack-dashboard/manage.py compress --force (code=exited,
status=1/FAILURE)
Process: 58458 ExecStartPre=/usr/bin/python
/usr/share/openstack-dashboard/manage.py collectstatic --noinput --clear
(code=exited, status=0/SUCCESS)
Main PID: 34243 (code=exited, status=0/SUCCESS)

9月 18 19:11:52 controller python[58465]: CommandError: An error occurred
during rendering
/usr/share/openstack-dashboard/openstack_dashboard/templates/_stylesheets.html:
/bin/sh: django_pyscss.compressor.DjangoScssFilter: command not found
9月 18 19:11:52 controller python[58465]: Found 'compress' tags in:
9月 18 19:11:52 controller python[58465]:
/usr/lib/python2.7/site-packages/horizon/templates/horizon/_conf.html
9月 18 19:11:52 controller python[58465]:
/usr/share/openstack-dashboard/openstack_dashboard/templates/_stylesheets.html
9月 18 19:11:52 controller python[58465]:
/usr/lib/python2.7/site-packages/horizon/templates/horizon/_scripts.html
9月 18 19:11:52 controller python[58465]: Compressing...
9月 18 19:11:52 controller systemd[1]: httpd.service: control process
exited, code=exited status=1
9月 18 19:11:52 controller systemd[1]: Failed to start The Apache HTTP
Server.
9月 18 19:11:52 controller systemd[1]: Unit httpd.service entered failed
state.
9月 18 19:11:52 controller systemd[1]: httpd.service failed.

How can I resolve this?
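One thing I can try, assuming a stock RDO-style install where the dashboard
settings live in /etc/openstack-dashboard/local_settings, is to re-run the
failing ExecStartPre step by hand and check that the SCSS pieces Horizon
needs are actually installed (the grep pattern is just a guess at the
relevant package names):

# rpm -qa | grep -iE 'pyscss|compressor|xstatic'
# cd /usr/share/openstack-dashboard
# python manage.py collectstatic --noinput --clear
# python manage.py compress --force

Running compress directly should print the full rendering error instead of
the truncated journal lines above.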

2016-09-30 20:00 GMT+08:00 openstack-request@lists.openstack.org:

Send Openstack mailing list submissions to
openstack@lists.openstack.org

To subscribe or unsubscribe via the World Wide Web, visit
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
or, via email, send a message with subject or body 'help' to
openstack-request@lists.openstack.org

You can reach the person managing the list at
openstack-owner@lists.openstack.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Openstack digest..."

Today's Topics:

  1. Re: Instances do not have access to internet (Artem Plakunov)
  2. Re: Instances do not have access to internet (Turbo Fredriksson)
  3. Re: Provisioning VMs in Openstack cluster (Turbo Fredriksson)
  4. Re: Multiple availability zones (Turbo Fredriksson)
  5. Re: Multiple availability zones (Tobias Urdin)
  6. Re: Multiple availability zones (Turbo Fredriksson)
  7. DHCP Agent debugging problem (Phani Pawan Padmanabharao)
  8. Re: Instances do not have access to internet (Imran Khakoo)
  9. Re: DHCP Agent debugging problem (Trinath Somanchi)

   10. Re: DHCP Agent debugging problem (Kevin Benton)
   11. Re: [OpenStack] [keystone] How to make keystone highly
       available? (Turbo Fredriksson)
   12. neutron -f json output parsing (Bill Nerenberg)
   13. Re: neutron -f json output parsing (Kevin Benton)
   14. MAGNUM - unable to create a magnum template
       (kamalakannan sanjeevan)
   15. External network connectivity problem (Davíð Örn Jóhannsson)
   16. Re: neutron -f json output parsing (Akihiro Motoki)

Message: 1
Date: Thu, 29 Sep 2016 14:59:57 +0300
From: Artem Plakunov artacc@lvk.cs.msu.su
To: Imran Khakoo imran.khakoo@netronome.com
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] Instances do not have access to internet
Message-ID: 57ED023D.10105@lvk.cs.msu.su
Content-Type: text/plain; charset="utf-8"; Format="flowed"

You are right, the router must have an interface in external network and
the external network must have a subnet

How exactly did you try to create the subnet? I guess using a CLI command?
It looks like you didn't specify the network which the new subnet should
belong to.

Try following this doc about creating an external network subnet:
http://docs.openstack.org/juno/install-guide/install/apt/content/neutron_initial-external-network.html

If you're still getting any errors, look into logs for details:
/var/log/neutron/server.log or /var/log/neutron-all.log
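Roughly, the doc boils down to naming the external network explicitly when
creating the subnet; something like this (network name, CIDR, gateway and
allocation pool below are placeholders for your own values):

$ neutron subnet-create ext-net 172.26.1.0/24 --name ext-subnet \
    --disable-dhcp --gateway 172.26.1.1 \
    --allocation-pool start=172.26.1.100,end=172.26.1.200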

29.09.2016 13:07, Imran Khakoo wrote:

Hi there,
I deleted all the rules and added them back one by one, seeing if each
change suddenly allowed connectivity. No improvement, unfortunately.

My current rules:
Direction  Ether Type  IP Protocol  Port Range  Remote IP Prefix  Remote Security Group
Ingress    IPv4        ICMP         Any         0.0.0.0/0         -
Egress     IPv4        ICMP         Any         0.0.0.0/0         -
Ingress    IPv4        TCP          1 - 65535   0.0.0.0/0         -
Egress     IPv4        TCP          1 - 65535   0.0.0.0/0         -
Ingress    IPv4        TCP          1 - 65535   -                 default
Egress     IPv4        TCP          1 - 65535   -                 default

Going back to my instances, pinging google:

ubuntu@throwaway:~$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
From 10.10.0.1 icmp_seq=17 Destination Net Unreachable
From 10.10.0.1 icmp_seq=18 Destination Net Unreachable

ubuntu@throwaway:~$ ip route
default via 10.10.0.1 dev eth0
10.10.0.0/16 dev eth0 proto kernel scope link src 10.10.0.4
169.254.169.254 via 10.10.0.1 dev eth0

ubuntu@throwaway:~$ ip neigh
10.10.0.2 dev eth0 lladdr fa:16:3e:d7:e1:d5 STALE
10.10.0.1 dev eth0 lladdr fa:16:3e:7c:cf:b1 REACHABLE
10.10.0.3 dev eth0 lladdr fa:16:3e:13:c8:8b STALE

So the gateway is 10.10.0.1 and the VM can reach it, but it somehow
can't route to 8.8.8.8. Looking at my openstack router, I notice that
it doesn't have a public IP address, only an internal one.

Name                                    Fixed IPs   Status   Type                 Admin State
(af24a36f-6790-4024-8ee2-b4fbbcb856ba)  10.10.0.1   Active   Internal Interface   UP

From other advice I received, the router should have both a public
interface and a private one. So when I try to add a public interface,
it requires me to first add a subnet.

So I'm guessing I should be creating a subnet on the ext_net, in order
to attach the external interface to it. I get the following error:

Error: Failed to create subnet "172.26.1.0/24" for network "None": The
resource could not be found. Neutron server returns request_ids:
['req-0e2edc22-c6a8-4038-89fd-26feb25393c6']

On Wed, Sep 28, 2016 at 7:23 PM, Turbo Fredriksson <turbo@bayour.com> wrote:

On Sep 28, 2016, at 5:32 PM, Imran Khakoo wrote:

> I did add this rule to default security group, that was the
first thing
> before I even launched an instance.

Yeah, that should have done it.

> Egress  IPv4 Any  Any          0.0.0.0/0 -
> Egress  IPv4 ICMP Any          -         default
> Egress  IPv4 TCP  80 (HTTP)    -         default
> Egress  IPv4 TCP  443 (HTTPS)  -         default
> Ingress IPv4 Any  Any          -         default
> Ingress IPv4 ICMP Any          0.0.0.0/0 -
> Ingress IPv4 TCP  22 (SSH)     0.0.0.0/0 -

What strikes me is the sixth column. It is/should be the "Remote
Security Group"
column.

I'm a little unsure on how to use that, but if all those rules
come from
the 'default' security group, then you'll probably end up with a loop
or something..


But because of the two Any/Any rules, you would not need the
80/443 rules.
Nor the 22 one.
--
Life sucks and then you die


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/
openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/
openstack

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack/
attachments/20160929/a13959fa/attachment-0001.html>


Message: 2
Date: Thu, 29 Sep 2016 13:34:21 +0100
From: Turbo Fredriksson turbo@bayour.com
To: Imran Khakoo imran.khakoo@netronome.com
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] Instances do not have access to internet
Message-ID: 943F844E-603E-48B2-AEA6-307EC1E15E5D@bayour.com
Content-Type: text/plain; charset=us-ascii

On Sep 29, 2016, at 11:07 AM, Imran Khakoo wrote:

ubuntu@throwaway:~$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
From 10.10.0.1 icmp_seq=17 Destination Net Unreachable
From 10.10.0.1 icmp_seq=18 Destination Net Unreachable

As far as I can tell, your security groups are just fine now.

But can you just do a "traceroute -n 8.8.8.8" as well?

Looking at your screenshot of your setup (first mail), it
looks to me that the router has both an internal (the one
connected to "the cloud" - "imrankhakoonet" I'm guessing)
and an external (the one connected to "the globe" - "ext_net"
if I'm to guess) interface.

Our assumption has been that the router isn't routing (which
is why I suggested twiddling with the SGs). If this still
holds true, then the traceroute I've asked you to run above
should reach the router (it's not absolutely clear, but I'm
guessing "10.10.0.1" if the 'ip route' command is correct -
please triple check by going into the router config and verify
that it has an "Internal Interface" with that IP) but NOT go
anywhere beyond that.

Looking at your first mail again, I just noticed that the
supposedly external network (the "ext_" part of the network
name - if this is NOT the external, then you should rename
it :) is NOT set as 'External' (the 'External=No' entry).

In the router, do you have an interface with the label
"External Gateway"? You shouldn't have, if I'm correct..

This is somewhat a misnomer - it is NOT the IP of the gateway,
it is the router's gateway IP.. Hmm, that doesn't make sense..

If your actual, real gateway (the one with 'Net access) is,
for example, "192.168.1.1/24", then that "External Gateway" in
the OS router needs to be something like "192.168.1.253/24"
(an unused IP on the same network as the real GW/FW/NAT/Whatever).
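In CLI terms (the router and network names below are just placeholders),
giving the router its external leg and then checking which address it
picked up would look something like:

$ neutron router-gateway-set my-router ext_net
$ neutron router-port-list my-router

The gateway port should show up with an address from the external
subnet's allocation pool.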


Message: 3
Date: Thu, 29 Sep 2016 13:52:29 +0100
From: Turbo Fredriksson turbo@bayour.com
To: openstack List openstack@lists.openstack.org
Subject: Re: [Openstack] Provisioning VMs in Openstack cluster
Message-ID: 08F93FF3-BCF2-4B40-B79C-6CDA873CD58D@bayour.com
Content-Type: text/plain; charset=us-ascii

On Sep 29, 2016, at 8:57 AM, Tobias Urdin wrote:

To just follow my hunch, have you configured vif_plugging_timeout in
nova.conf or is it the default value of 300?
We have vif_plugging_timeout=5, you should try that. We are live on
Liberty and are slowly upgrading to Mitaka for reference.

Mine was the default 300. I've changed that now and I'll do some tests
to see if that changes things. Thanx for the tip.
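For reference, this lives under [DEFAULT] in nova.conf on the compute nodes
(5 is just the value Tobias mentioned; whether it suits your environment is
another question):

[DEFAULT]
vif_plugging_timeout = 5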


Message: 4
Date: Thu, 29 Sep 2016 14:02:16 +0100
From: Turbo Fredriksson turbo@bayour.com
To: openstack List openstack@lists.openstack.org
Subject: Re: [Openstack] Multiple availability zones
Message-ID: C3E8ED7C-D1F6-465D-AB24-FF3A05095045@bayour.com
Content-Type: text/plain; charset=us-ascii

On Sep 29, 2016, at 8:52 AM, Tobias Urdin wrote:

If I have understood it correctly your primary question is about
availability zones.

Technically I guess that's right, but not so much about what it/they
are and how they're used, but more like "can a controller manage multiple
zones"..

And with "a controller" I'm thinking Neutron, Keystone, Cinder etc, etc.
--
You know, boys, a nuclear reactor is a lot like a woman.
You just have to read the manual and press the right buttons
- Homer Simpson


Message: 5
Date: Thu, 29 Sep 2016 13:11:48 +0000
From: Tobias Urdin tobias.urdin@crystone.com
To: Turbo Fredriksson turbo@bayour.com
Cc: openstack List openstack@lists.openstack.org
Subject: Re: [Openstack] Multiple availability zones
Message-ID: 7b15a1113a6842f2a5f9647aeb5e4cd8@mb01.staff.ognet.se
Content-Type: text/plain; charset="us-ascii"

Yes, because an availability zone does not "belong" to anything.

It's simply a group of resources defined in your nova database to make
scheduling decisions.
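For example, on the Nova side an availability zone is really just a host
aggregate with an availability zone set on it (the names below are
placeholders):

$ nova aggregate-create my-aggregate my-az
$ nova aggregate-add-host my-aggregate compute-01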

Best regards

On 09/29/2016 03:08 PM, Turbo Fredriksson wrote:

On Sep 29, 2016, at 8:52 AM, Tobias Urdin wrote:

If I have understood it correctly your primary question is about
availability zones.
Technically I guess that's right, but not so much about what it/they
are and how they're used, but more like "can a controller manage multiple
zones"..

And with "a controller" I'm thinking Neutron, Keystone, Cinder etc, etc.
--
You know, boys, a nuclear reactor is a lot like a woman.
You just have to read the manual and press the right buttons
- Homer Simpson


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/
openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/
openstack

Message: 6
Date: Thu, 29 Sep 2016 14:45:25 +0100
From: Turbo Fredriksson turbo@bayour.com
To: openstack List openstack@lists.openstack.org
Subject: Re: [Openstack] Multiple availability zones
Message-ID: FEE4FF59-323B-42E1-89C6-B1E83FE1DA18@bayour.com
Content-Type: text/plain; charset=us-ascii

On Sep 29, 2016, at 2:11 PM, Tobias Urdin wrote:

Yes, because an availability zone does not "belong" to anything.

Well, the comments in the config file(s) seem to differ from that
statement:

    # Default value of availability zone hints. The availability zone
    # aware schedulers use this when the resources availability_zone_hints
    # is empty. Multiple availability zones can be specified by a comma
    # separated string. This value can be empty. In this case, even if
    # availability_zone_hints for a resource is empty, availability zone
    # is considered for high availability while scheduling the resource.
    # (list value)
    default_availability_zones = nova

    # Availability zone of this node (string value)
    availability_zone = nova

Or at least, it might be a little .. "fuzzy". Does the "Multiple availability
zones can be specified by a comma separated string" belong to
'default_availability_zones' (the suffixing 's' in that seems to indicate
that - plural) or to 'availability_zone'??

How can there be multiple defaults (!!) but only one specified!?

So what you're saying is that if I specify, for example:

    default_availability_zones = nova,users
    availability_zone = nova

then that service (Neutron in this case) should be able to deal with both
my
AZs? What about the i/o from Neutron? Will it be automatic, or do I have to
'link' my two Neutron (etc) controllers somehow? Or are they linked "via"
RabbitMQ and/or MySQL?
--
Turbo Fredriksson
turbo@bayour.com


Message: 7
Date: Thu, 29 Sep 2016 14:17:57 +0000
From: Phani Pawan Padmanabharao phani.pawan@huawei.com
To: "openstack@lists.openstack.org" openstack@lists.openstack.org
Subject: [Openstack] DHCP Agent debugging problem
Message-ID: <DC1AA823D68053408AA3299B7CB30AB6016BDF9D@lhreml507-mbs>
Content-Type: text/plain; charset="us-ascii"

Hello All,

Me and my team members are trying to find out the scenarios and the
effects when an agent dies or does not function properly in OpenStack.
We are trying to debug the Dhcp Agent in Neutron. But apart from the
console logs (of VMs) and the logs of Agent and Neutron server, is there
any way to know that the Dhcp release did not happen as expected?
For example, in the L3 Agent, we can directly see the iptable rules inside
the q-router namespace.
Is there any such mechanism for Dhcp Agent, where we can login to the
q-dhcp namespace and execute some commands to see if the Dhcp release did
not happen?

Thanks in Advance,
Phani
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack/
attachments/20160929/041151cf/attachment-0001.html>


Message: 8
Date: Thu, 29 Sep 2016 16:22:21 +0200
From: Imran Khakoo imran.khakoo@netronome.com
To: Turbo Fredriksson turbo@bayour.com
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] Instances do not have access to internet
Message-ID:
<CACS0JTdBrrPv-tpoRSSNKsCdRjN8P_pM=knGgeT5xwz
jb5Leog@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

I realized I had forgotten my admin details so I tore my install down and
started afresh.
In the process I realized that the installer was trying to take over the
entire corporate network:
[image: Inline image 1]
I changed the range to an allowable figure and am continuing with the
install now. Will update once I've retried.
Thanks for all the help so far and for being so responsive, guys.

Regards,
Imran

On Thu, Sep 29, 2016 at 2:34 PM, Turbo Fredriksson turbo@bayour.com
wrote:

On Sep 29, 2016, at 11:07 AM, Imran Khakoo wrote:

ubuntu@throwaway:~$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
From 10.10.0.1 icmp_seq=17 Destination Net Unreachable
From 10.10.0.1 icmp_seq=18 Destination Net Unreachable

As far as I can tell, your security groups are just fine now.

But can you just do a "traceroute -n 8.8.8.8" as well?

Looking at your screenshot of your setup (first mail), it
looks to me that the router has both an internal (the one
connected to "the cloud" - "imrankhakoonet" I'm guessing)
and an external (the one connected to "the globe" - "ext_net"
if I'm to guess) interface.

Our assumption has been that the router isn't routing (which
is why I suggested twiddling with the SGs). If this still
holds true, then the traceroute I've asked you to run above
should reach the router (it's not absolutely clear, but I'm
guessing "10.10.0.1" if the 'ip route' command is correct -
please triple check by going into the router config and verify
that it has an "Internal Interface" with that IP) but NOT go
anywhere beyond that.

Looking at your first mail again, I just noticed that the
supposedly external network (the "ext_" part of the network
name - if this is NOT the external, then you should rename
it :) is NOT set as 'External' (the 'External=No' entry).

In the router, do you have an interface with the label
"External Gateway"? You shouldn't have, if I'm correct..

This is somewhat a misnomer - it is NOT the IP of the gateway,
it is the router's gateway IP.. Hmm, that doesn't make sense..

If your actual, real gateway (the one with 'Net access) is,
for example, "192.168.1.1/24", then that "External Gateway" in
the OS router needs to be something like "192.168.1.253/24"
(an unused IP on the same network as the real GW/FW/NAT/Whatever).

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack/
attachments/20160929/8449bd89/attachment-0001.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 101707 bytes
Desc: not available
URL: http://lists.openstack.org/pipermail/openstack/
attachments/20160929/8449bd89/attachment-0002.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Screenshot from 2016-09-29 16-06-40.png
Type: image/png
Size: 76666 bytes
Desc: not available
URL: http://lists.openstack.org/pipermail/openstack/
attachments/20160929/8449bd89/attachment-0003.png>


Message: 9
Date: Thu, 29 Sep 2016 16:48:34 +0000
From: Trinath Somanchi trinath.somanchi@nxp.com
To: Phani Pawan Padmanabharao phani.pawan@huawei.com,
"openstack@lists.openstack.org" openstack@lists.openstack.org
Subject: Re: [Openstack] DHCP Agent debugging problem
Message-ID:
<DB3PR04MB079636D760000346AB45E0949DCE0@DB3PR04MB0796.
eurprd04.prod.outlook.com>

Content-Type: text/plain; charset="us-ascii"

there exists a q-dhcp-xxxxx namespace too. Try checking there.
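For example (the network UUID is a placeholder, and the lease-file path
assumes the agent's default state_path of /var/lib/neutron):

# ip netns | grep qdhcp
# ip netns exec qdhcp-<network-id> ip addr
# cat /var/lib/neutron/dhcp/<network-id>/leases

The namespace holds the dnsmasq-owned interface, and the leases file is
where dnsmasq records leases, so you can watch whether a release actually
landed.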

/Trinath


From: Phani Pawan Padmanabharao phani.pawan@huawei.com
Sent: Thursday, September 29, 2016 7:47:57 PM
To: openstack@lists.openstack.org
Subject: [Openstack] DHCP Agent debugging problem

Hello All,

Me and my team members are trying to find out the scenarios and the
effects when an agent dies or does not function properly in OpenStack.
We are trying to debug the Dhcp Agent in Neutron. But apart from the
console logs (of VMs) and the logs of Agent and Neutron server, is there
any way to know that the Dhcp release did not happen as expected?
For example, in the L3 Agent, we can directly see the iptable rules inside
the q-router namespace.
Is there any such mechanism for Dhcp Agent, where we can login to the
q-dhcp namespace and execute some commands to see if the Dhcp release did
not happen?

Thanks in Advance,
Phani
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack/
attachments/20160929/190105ad/attachment-0001.html>


Message: 10
Date: Thu, 29 Sep 2016 10:05:27 -0700
From: Kevin Benton kevin@benton.pub
To: Phani Pawan Padmanabharao phani.pawan@huawei.com
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] DHCP Agent debugging problem
Message-ID:
<CAOF6JM9j+21=pv9O4DnKJ0-HKw1vagH3E_B79caEuy00zpQw@
mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

If you're looking for dhcp release messages, I believe dnsmasq will log
those to syslog for you.
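For example, something along these lines (the log file and syslog tag can
differ per distro and syslog configuration):

# grep -i dhcprelease /var/log/syslog
# journalctl -t dnsmasq-dhcp | grep -i dhcprelease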

On Sep 29, 2016 10:35, "Phani Pawan Padmanabharao" <phani.pawan@huawei.com
>
wrote:

Hello All,

Me and my team members are trying to find out the scenarios and the
effects when an agent dies or does not function properly in OpenStack.

We are trying to debug the Dhcp Agent in Neutron. But apart from the
console logs (of VMs) and the logs of Agent and Neutron server, is there
any way to know that the Dhcp release did not happen as expected?

For example, in the L3 Agent, we can directly see the iptable rules
inside
the q-router namespace.

Is there any such mechanism for Dhcp Agent, where we can login to the
q-dhcp namespace and execute some commands to see if the Dhcp release did
not happen?

Thanks in Advance,

Phani


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/
openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/
openstack

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack/
attachments/20160929/a54271b8/attachment-0001.html>


Message: 11
Date: Thu, 29 Sep 2016 23:30:23 +0100
From: Turbo Fredriksson turbo@bayour.com
To: openstack List openstack@lists.openstack.org
Subject: Re: [Openstack] [OpenStack] [keystone] How to make keystone
highly available?
Message-ID: 454FA4E8-1A42-4B75-A213-0E4812AA0B93@bayour.com
Content-Type: text/plain; charset=us-ascii

On Sep 21, 2016, at 6:48 AM, Van Leeuwen, Robert wrote:

If I had these constraints I would add a loadbalancer-config on the same
machine that runs the OpenStack apis.

Now that I have multiple Neutron instances, how do I make my routers
HA?

I managed to make the router 'distributed', but there doesn't seem
to be a '--ha' for "neutron router-update" (like there is for
"neutron router-create").


Message: 12
Date: Fri, 30 Sep 2016 10:55:44 +0200
From: Bill Nerenberg bill.nerenberg75@gmail.com
To: openstack@lists.openstack.org
Subject: [Openstack] neutron -f json output parsing
Message-ID:
<CAJJpmPh0f5wb1PcQEPw9EWWye5O3sJc7nTDWbBRqJ2kmOm=Huw@mail.
gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi all

When I run neutron -f json in the command below, the pools value is quoted
so I cannot use it with JSON parsers like jq

For example, this is neutron's problematic output [1] (see the pools line)

$ neutron lbaas-loadbalancer-show ef315cff-1d0a-4327-93c6-e9bf7e544e2d -f
json
{
"description": "",
"admin_state_up": false,
"tenant_id": "1bcf7ba13bcb496196d72f481bfebb5c",
"provisioning_status": "ACTIVE",
"vip_subnet_id": "d02c8267-30be-4cdc-aa4a-a7c1ca6504b8",
"listeners": "",
"vip_address": "10.0.2.160",
"vip_port_id": "07227a77-1afe-466b-9d54-20e8637fc2b0",
"provider": "f5networks",
"pools": "{\"id\": \"1b792ace-0cbf-47cc-a3d5-2140c570ccee\"}",
"id": "ef315cff-1d0a-4327-93c6-e9bf7e544e2d",
"operating_status": "ONLINE",
"name": "test-lbaasv2"
}

Which triggers an error in jq (or other tools)

neutron lbaas-loadbalancer-show ef315cff-1d0a-4327-93c6-e9bf7e544e2d -f
json | jq ".pools.id"
jq: error: Cannot index string with string

If instead I use the following JSON without the double quotes it works just
fine

$ cat myjson
{
"description": "",
"admin_state_up": false,
"tenant_id": "1bcf7ba13bcb496196d72f481bfebb5c",
"provisioning_status": "ACTIVE",
"vip_subnet_id": "d02c8267-30be-4cdc-aa4a-a7c1ca6504b8",
"listeners": "",
"vip_address": "10.0.2.160",
"vip_port_id": "07227a77-1afe-466b-9d54-20e8637fc2b0",
"provider": "f5networks",
"pools": {"id": "1b792ace-0cbf-47cc-a3d5-2140c570ccee"},
"id": "ef315cff-1d0a-4327-93c6-e9bf7e544e2d",
"operating_status": "ONLINE",
"name": "test-lbaasv2"
}
$ cat myjson | jq ".pools.id"
"1b792ace-0cbf-47cc-a3d5-2140c570ccee"

Questions, questions...

Is the output of [1] intentional or is it a bug? If it is not a bug and
it is intentional... how are we expected to parse it?

Comments would be greatly appreciated

Many thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack/
attachments/20160930/564cc5a4/attachment-0001.html>


Message: 13
Date: Fri, 30 Sep 2016 02:16:36 -0700
From: Kevin Benton kevin@benton.pub
To: Bill Nerenberg bill.nerenberg75@gmail.com
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] neutron -f json output parsing
Message-ID:
<CAOF6JOn3jKG6RznU-eZbxSpiVraZO4bLvPP7bqWXLV=sL
QrA@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Looks very much like a bug (double json encoding). File a bug on launchpad
against python-neutronclient.

On Sep 30, 2016 05:10, "Bill Nerenberg" bill.nerenberg75@gmail.com
wrote:

Hi all

When I run neutron -f json in the command below, the pools value is
quoted
so I cannot use it with JSON parsers like jq

For example, this is neutron's problematic output [1] (see the pools
line)

$ neutron lbaas-loadbalancer-show ef315cff-1d0a-4327-93c6-e9bf7e544e2d
-f
json
{
"description": "",
"admin_state_up": false,
"tenant_id": "1bcf7ba13bcb496196d72f481bfebb5c",
"provisioning_status": "ACTIVE",
"vip_subnet_id": "d02c8267-30be-4cdc-aa4a-a7c1ca6504b8",
"listeners": "",
"vip_address": "10.0.2.160",
"vip_port_id": "07227a77-1afe-466b-9d54-20e8637fc2b0",
"provider": "f5networks",
"pools": "{\"id\": \"1b792ace-0cbf-47cc-a3d5-2140c570ccee\"}",
"id": "ef315cff-1d0a-4327-93c6-e9bf7e544e2d",
"operating_status": "ONLINE",
"name": "test-lbaasv2"
}

Which triggers an error in jq (or other tools)

neutron lbaas-loadbalancer-show ef315cff-1d0a-4327-93c6-e9bf7e544e2d -f
json | jq ".pools.id"
jq: error: Cannot index string with string

If instead I use the following JSON without the double quotes it works
just fine

$ cat myjson
{
"description": "",
"admin_state_up": false,
"tenant_id": "1bcf7ba13bcb496196d72f481bfebb5c",
"provisioning_status": "ACTIVE",
"vip_subnet_id": "d02c8267-30be-4cdc-aa4a-a7c1ca6504b8",
"listeners": "",
"vip_address": "10.0.2.160",
"vip_port_id": "07227a77-1afe-466b-9d54-20e8637fc2b0",
"provider": "f5networks",
"pools": {"id": "1b792ace-0cbf-47cc-a3d5-2140c570ccee"},
"id": "ef315cff-1d0a-4327-93c6-e9bf7e544e2d",
"operating_status": "ONLINE",
"name": "test-lbaasv2"
}
$ cat myjson | jq ".pools.id"
"1b792ace-0cbf-47cc-a3d5-2140c570ccee"

Questions, questions...

Is the output of [1] intentional or is it a bug? If it is not a bug and
it is intentional... how are we expected to parse it?

Comments would be greatly appreciated

Many thanks


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/
openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/
openstack

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack/
attachments/20160930/133253c4/attachment-0001.html>


Message: 14
Date: Fri, 30 Sep 2016 16:04:55 +0530
From: kamalakannan sanjeevan chirukamalakannan@gmail.com
To: openstack@lists.openstack.org
Subject: [Openstack] MAGNUM - unable to create a magnum template
Message-ID:
<CAHSiE9Z4=or9gS_eFAY6AFOtOU79CRqVqZyWXv4LH=4XQ
G0WRg@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi,

Can anyone have a look at the logs and help me out?

I am using python-magnumclient 2.3.0, along with Magnum installed on
Ubuntu 14.04 with Mitaka.

CINDER VOLUMES


root@VFSR1:/opt/mesos_image# pvs
PV VG Fmt Attr PSize PFree
/dev/loop2 cinder-volumes lvm2 a-- 250.00g 213.00g
/dev/sda3 ubuntu-vg lvm2 a-- 3.64t 0
root@VFSR1:/opt/mesos_image# vgs
VG #PV #LV #SN Attr VSize VFree
cinder-volumes 1 7 1 wz--n- 250.00g 213.00g
ubuntu-vg 1 2 0 wz--n- 3.64t 0

root@VFSR1:/opt/mesos_image# openstack image list
+--------------------------------------+----------------------+--------+
| ID | Name | Status |
+--------------------------------------+----------------------+--------+
| f9acd880-f50f-493a-b6ed-46620b7b3481 | ubuntu-mesos | active |
| 94ee6d6e-93fa-47b2-844f-2d8d2ad1a788 | ubuntu-14.04.3-mesos | active |
| affb50c2-ca04-41fa-bf73-48ae526d2b15 | fedora-atomic-latest | active |
| c1c8e84e-12ba-4b05-b382-e57850e5dd6d | cirros | active |
+--------------------------------------+----------------------+--------+
root@VFSR1:/opt/mesos_image# glance -v image-list
+--------------------------------------+----------------------+-------------+------------------+-----------+--------+----------------------------------+
| ID                                   | Name                 | Disk_format | Container_format | Size      | Status | Owner                            |
+--------------------------------------+----------------------+-------------+------------------+-----------+--------+----------------------------------+
| c1c8e84e-12ba-4b05-b382-e57850e5dd6d | cirros               | qcow2       | bare             | 13287936  | active | 4db51f28e56742a6b62333cdd7ec890d |
| affb50c2-ca04-41fa-bf73-48ae526d2b15 | fedora-atomic-latest | qcow2       | bare             | 507928064 | active | 4db51f28e56742a6b62333cdd7ec890d |
| 94ee6d6e-93fa-47b2-844f-2d8d2ad1a788 | ubuntu-14.04.3-mesos | qcow2       | bare             | 753616384 | active | 4db51f28e56742a6b62333cdd7ec890d |
| f9acd880-f50f-493a-b6ed-46620b7b3481 | ubuntu-mesos         | qcow2       | bare             | 753616384 | active | 4db51f28e56742a6b62333cdd7ec890d |
+--------------------------------------+----------------------+-------------+------------------+-----------+--------+----------------------------------+
root@VFSR1:/opt/mesos_image# openstack keypair list
+--------+-------------------------------------------------+
| Name | Fingerprint |
+--------+-------------------------------------------------+
| mykey | f8:7b:3a:df:1d:eb:b3:12:97:d5:7f:b2:bd:af:77:86 |
| mykey1 | be:eb:0e:05:73:5f:59:5c:07:c4:26:d7:18:98:d9:d5 |
+--------+-------------------------------------------------+
root@VFSR1:/opt/mesos_image# openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | 1 | True |
| 2 | m1.small | 2048 | 20 | 0 | 1 | True |
| 3 | m1.medium | 4096 | 40 | 0 | 2 | True |
| 4 | m1.large | 8192 | 80 | 0 | 4 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True |
+----+-----------+-------+------+-----------+-------+-----------+
root@VFSR1:/opt/mesos_image# openstack network list
+--------------------------------------+-------------------------------------------------------+--------------------------------------+
| ID                                   | Name                                                  | Subnets                              |
+--------------------------------------+-------------------------------------------------------+--------------------------------------+
| 2241a192-589c-48ce-ac31-bd133c02d15b | public                                                | 58a123cb-37a3-457d-ad23-350fab00cab1 |
| baeb67e9-0612-45cf-bb8e-7945e20bfba7 | private                                               | 52787572-803c-4688-b3b4-a747c3e73e51 |
| 61d1cae1-f270-41cb-969c-08aca208f5a9 | public1                                               | 839dcf6c-97cd-4ed3-b241-e29079aff206 |
| e3f63530-2ddc-44bd-ad9b-0c9002ce6766 | private                                               | aecb2758-ced7-4d43-8ab5-5d0f38dc68d1 |
| a91b3943-ac8b-41ca-9767-ad9cf2c1dc60 | swarm-cluster-zhxyvth46o5c-fixed_network-xaz6nx43ec5e | 61789da1-17c9-431e-b728-22c4b923fd53 |
+--------------------------------------+-------------------------------------------------------+--------------------------------------+

MAGNUM command

root@VFSR1:/var/log/magnum#
/opt/python-magnumclient/.magnumclient-env/bin/magnum service-list
+----+-----------------+------------------+-------+----------+-----------------+---------------------------+---------------------------+
| id | host            | binary           | state | disabled | disabled_reason | created_at                | updated_at                |
+----+-----------------+------------------+-------+----------+-----------------+---------------------------+---------------------------+
| 1  | VFSR1.svcmgr.io | magnum-conductor | up    |          | -               | 2016-09-30T05:24:19+00:00 | 2016-09-30T09:58:17+00:00 |
+----+-----------------+------------------+-------+----------+-----------------+---------------------------+---------------------------+

root@VFSR1:/opt/mesos_image#
/opt/python-magnumclient/.magnumclient-env/bin/magnum
cluster-template-create --name mesos-cluster-template --image-id
ubuntu-mesos --keypair-id mykey --external-network-id public
--dns-nameserver 172.27.10.76 --flavor-id m1.small --coe mesos
ERROR: Method Not Allowed (HTTP 405) (Request-ID:
req-bb2bd994-7453-44e7-aa73-66b4fafa8796)
root@VFSR1:/opt/mesos_image#
/opt/python-magnumclient/.magnumclient-env/bin/magnum --debug
cluster-template-create --name mesos-cluster-template --image-id
ubuntu-mesos --keypair-id mykey --external-network-id public
--dns-nameserver 172.27.10.76 --flavor-id m1.small --coe mesos
DEBUG (extension:157) found extension EntryPoint.parse('v2token =
keystoneauth1.loading.plugins.identity.v2:Token')
DEBUG (extension:157) found extension EntryPoint.parse('v3oauth1 =
keystoneauth1.extras.oauth1.
loading:V3OAuth1')
DEBUG (extension:157) found extension EntryPoint.parse('admintoken =
keystoneauth1.loading.
plugins.admintoken:AdminToken')
DEBUG (extension:157) found extension EntryPoint.parse('v3oidcauthcode =
keystoneauth1.loading.
plugins.identity.v3:OpenIDConnectAuthorizationCode
')
DEBUG (extension:157) found extension EntryPoint.parse('v2password =
keystoneauth1.loading.plugins.identity.v2:Password')
DEBUG (extension:157) found extension EntryPoint.parse('v3samlpassword =
keystoneauth1.extras.
saml2.loading:Saml2Password')
DEBUG (extension:157) found extension EntryPoint.parse('v3password =
keystoneauth1.loading.
plugins.identity.v3:Password')
DEBUG (extension:157) found extension EntryPoint.parse('v3oidcaccesstoken
=
keystoneauth1.loading.plugins.identity.v3:OpenIDConnectAccessToken')
DEBUG (extension:157) found extension EntryPoint.parse('v3oidcpassword =
keystoneauth1.loading.
plugins.identity.v3:OpenIDConnectPassword')
DEBUG (extension:157) found extension EntryPoint.parse('v3kerberos =
keystoneauth1.extras.kerberos.loading:Kerberos')
DEBUG (extension:157) found extension EntryPoint.parse('token =
keystoneauth1.loading.
plugins.identity.generic:Token')
DEBUG (extension:157) found extension
EntryPoint.parse('v3oidcclientcredentials =
keystoneauth1.loading.plugins.identity.v3:OpenIDConnectClientCredentials
')
DEBUG (extension:157) found extension EntryPoint.parse('v3tokenlessauth =
keystoneauth1.loading.
plugins.identity.v3:TokenlessAuth')
DEBUG (extension:157) found extension EntryPoint.parse('v3token =
keystoneauth1.loading.plugins.identity.v3:Token')
DEBUG (extension:157) found extension EntryPoint.parse('v3totp =
keystoneauth1.loading.
plugins.identity.v3:TOTP')
DEBUG (extension:157) found extension EntryPoint.parse('password =
keystoneauth1.loading.plugins.identity.generic:Password')
DEBUG (extension:157) found extension EntryPoint.parse('v3fedkerb =
keystoneauth1.extras.kerberos.
loading:MappedKerberos')
DEBUG (session:337) REQ: curl -g -i -X GET http://VFSR1:35357/v3 -H
"Accept: application/json" -H "User-Agent: magnum keystoneauth1/2.12.1
python-requests/2.11.1 CPython/2.7.6"
INFO (connectionpool:214) Starting new HTTP connection (1): vfsr1
DEBUG (connectionpool:401) "GET /v3 HTTP/1.1" 200 245
DEBUG (session:366) RESP: [200] Date: Fri, 30 Sep 2016 10:15:11 GMT Server:
Apache/2.4.7 (Ubuntu) Vary: X-Auth-Token X-Distribution: Ubuntu
x-openstack-request-id: req-560a5d64-5b95-4a4c-a3ad-ac3f7e040b1d
Content-Length: 245 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive
Content-Type: application/json
RESP BODY: {"version": {"status": "stable", "updated":
"2016-04-04T00:00:00Z", "media-types": [{"base": "application/json",
"type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.6",
"links": [{"href": "http://vfsr1:35357/v3/", "rel": "self"}]}}

DEBUG (base:165) Making authentication request to
http://vfsr1:35357/v3/auth/tokens
DEBUG (connectionpool:401) "POST /v3/auth/tokens HTTP/1.1" 201 5791
DEBUG (base:170) {"token": {"methods": ["password"], "roles": [{"id":
"20b84b42947e4605979a1616df15b1f9", "name": "admin"}], "expiresat":
"2016-09-30T11:15:11.706358Z", "project": {"domain": {"id":
"3eb00003f30847e6be79f1f7b1295276", "name": "default"}, "id":
"4db51f28e56742a6b62333cdd7ec890d", "name": "admin"}, "catalog":
[{"endpoints": [{"url": "http://VFSR1:8000/v1", "interface": "internal",
"region": "RegionOne", "region
id": "RegionOne", "id":
"4159e4f0becf45e5b22e0d7f79a3e8e1"}, {"url": "http://VFSR1:8000/v1",
"interface": "public", "region": "RegionOne", "regionid": "RegionOne",
"id": "939431700edc44fa9ba6178f65f4770b"}, {"url": "http://VFSR1:8000/v1",
"interface": "internal", "region": "RegionOne", "region
id": "RegionOne",
"id": "da519a3218374c0f9beeeaffb9dc183c"}], "type": "cloudformation",
"id":
"07a812778ce94168a12436b0b3ec9265", "name": "heat-cfn"}, {"endpoints":
[{"url": "http://VFSR1:8776/v1/4db51f28e56742a6b62333cdd7ec890d",
"interface": "admin", "region": "RegionOne", "regionid": "RegionOne",
"id": "7e23305b957a40aebdb89ccec9fd247a"}, {"url": "
http://VFSR1:8776/v1/4db51f28e56742a6b62333cdd7ec890d", "interface":
"internal", "region": "RegionOne", "region
id": "RegionOne", "id":
"84a255caa89f423e956882eab99b8926"}, {"url": "
http://VFSR1:8776/v1/4db51f28e56742a6b62333cdd7ec890d", "interface":
"public", "region": "RegionOne", "regionid": "RegionOne", "id":
"85e21af71ca648b590db6428ab7c3b97"}], "type": "volume", "id":
"211229f855b449d69460c1f63f2ac24f", "name": "cinder"}, {"endpoints":
[{"url": "http://VFSR1:8776/v2/4db51f28e56742a6b62333cdd7ec890d",
"interface": "internal", "region": "RegionOne", "region
id": "RegionOne",
"id": "22a08265fd164d509efc54ae4046f6ac"}, {"url": "
http://VFSR1:8776/v2/4db51f28e56742a6b62333cdd7ec890d", "interface":
"public", "region": "RegionOne", "regionid": "RegionOne", "id":
"89bad1f8756c4b68a9cd3b9d19beab65"}, {"url": "
http://VFSR1:8776/v2/4db51f28e56742a6b62333cdd7ec890d", "interface":
"admin", "region": "RegionOne", "region
id": "RegionOne", "id":
"c1276098101647b483e1d5fe7627c2cd"}], "type": "volumev2", "id":
"42f8cb97e4da4ed2b417aeb3c2b0a1ed", "name": "cinderv2"}, {"endpoints":
[{"url": "http://VFSR1:35357/v3", "interface": "admin", "region":
"RegionOne", "regionid": "RegionOne", "id":
"2e515fa624534b7e991bd7629bb99add"}, {"url": "http://VFSR1:5000/v3",
"interface": "public", "region": "RegionOne", "region
id": "RegionOne",
"id": "75edc1fdbace46a4832de17e4104c8d4"}, {"url": "http://VFSR1:5000/v3",
"interface": "internal", "region": "RegionOne", "regionid": "RegionOne",
"id": "f4c49c33c35f4107ad318ab122a494db"}], "type": "identity", "id":
"734adc8737a147889b6d4d9b84a2e53e", "name": "keystone"}, {"endpoints":
[{"url": "http://VFSR1:8774/v2.1/4db51f28e56742a6b62333cdd7ec890d",
"interface": "admin", "region": "RegionOne", "region
id": "RegionOne",
"id": "5d863fea75224f8e975de30b685720aa"}, {"url": "
http://VFSR1:8774/v2.1/4db51f28e56742a6b62333cdd7ec890d", "interface":
"public", "region": "RegionOne", "regionid": "RegionOne", "id":
"d07e368fe2ec46eda4308462ec0a17c6"}, {"url": "
http://VFSR1:8774/v2.1/4db51f28e56742a6b62333cdd7ec890d", "interface":
"internal", "region": "RegionOne", "region
id": "RegionOne", "id":
"feaf2d955112493695fc37dd6487852b"}], "type": "compute", "id":
"758ee85faf24404bb07a72f5d0230f4a", "name": "nova"}, {"endpoints":
[{"url":
"http://VFSR1:9511/v1", "interface": "public", "region": "RegionOne",
"regionid": "RegionOne", "id": "38a55b4f0ba148b8a373289bc99a759b"},
{"url": "http://VFSR1:9511/v1", "interface": "admin", "region":
"RegionOne", "region
id": "RegionOne", "id":
"3f57223d4d9a4c55adb6269903e7d54c"}, {"url": "http://VFSR1:9511/v1",
"interface": "internal", "region": "RegionOne", "regionid": "RegionOne",
"id": "403894a2863740b7b378f834dc5e1c94"}], "type": "container-infra",
"id": "89eef987c8d34793a519b732c5031277", "name": "magnum"}, {"endpoints":
[{"url": "http://VFSR1:9292", "interface": "admin", "region": "RegionOne",
"region
id": "RegionOne", "id": "0d5f2dc981e94c348cc7a16ddea9302d"},
{"url": "http://VFSR1:9292", "interface": "internal", "region":
"RegionOne", "regionid": "RegionOne", "id":
"6d7a7585dc1541ffbb1eb37d05549376"}, {"url": "http://VFSR1:9292",
"interface": "public", "region": "RegionOne", "region
id": "RegionOne",
"id": "f21462af5bfc450e8ee3038f6a3744e4"}], "type": "image", "id":
"b27c5b3779744f239675e8f990245e2b", "name": "glance"}, {"endpoints":
[{"url": "http://VFSR1:9696", "interface": "admin", "region": "RegionOne",
"regionid": "RegionOne", "id": "4f69c5610b594fed811233da612540d9"},
{"url": "http://VFSR1:9696", "interface": "public", "region": "RegionOne",
"region
id": "RegionOne", "id": "c6c62f290a074bf2987e20c2349fd41a"},
{"url": "http://VFSR1:9696", "interface": "internal", "region":
"RegionOne", "regionid": "RegionOne", "id":
"de6d308a61774c43a84eee665d379227"}], "type": "network", "id":
"c33e3e2572e345f2bdf037cb8fc3aaf1", "name": "neutron"}, {"endpoints":
[{"url": "http://VFSR1:8004/v1/4db51f28e56742a6b62333cdd7ec890d",
"interface": "admin", "region": "RegionOne", "region
id": "RegionOne",
"id": "614f00d7df0f4f13931d4c589b7cc342"}, {"url": "
http://VFSR1:8004/v1/4db51f28e56742a6b62333cdd7ec890d", "interface":
"internal", "region": "RegionOne", "regionid": "RegionOne", "id":
"cae0b628063645b2b29e4e2a7af4d63b"}, {"url": "
http://VFSR1:8004/v1/4db51f28e56742a6b62333cdd7ec890d", "interface":
"public", "region": "RegionOne", "region
id": "RegionOne", "id":
"d21e8bbcc97c40f48940bb106505eeb8"}], "type": "orchestration", "id":
"d763c396741747d6bceae777307e1432", "name": "heat"}], "user": {"domain":
{"id": "3eb00003f30847e6be79f1f7b1295276", "name": "default"}, "id":
"3bb731e1886347a19e90c06185be8a9c", "name": "admin"}, "auditids":
["BK5DedQERwC-k0-9tutp2Q"], "issued
at": "2016-09-30T10:15:11.000000Z"}}
DEBUG (session:337) REQ: curl -g -i -X POST
http://VFSR1:9511/v1/clustertemplates -H "OpenStack-API-Version:
container-infra latest" -H "X-Auth-Token:
{SHA1}1ec170abc088b6173e78d9dd29199f685d79f30f" -H "Content-Type:
application/json" -H "Accept: application/json" -H "User-Agent: None" -d
'{"labels": {}, "floatingipenabled": true, "fixedsubnet": null,
"master
flavorid": null, "noproxy": null, "httpsproxy": null,
"tls
disabled": false, "keypairid": "mykey", "public": false,
"http
proxy": null, "dockervolumesize": null, "servertype": "vm",
"external
networkid": "public", "imageid": "ubuntu-mesos",
"volumedriver": null, "registryenabled": false, "dockerstoragedriver":
"devicemapper", "name": "mesos-cluster-template", "networkdriver": null,
"fixed
network": null, "coe": "mesos", "flavorid": "m1.small",
"master
lbenabled": false, "dnsnameserver": "172.27.10.76"}'
INFO (connectionpool:214) Starting new HTTP connection (1): vfsr1
DEBUG (connectionpool:401) "POST /v1/clustertemplates HTTP/1.1" 405 117
DEBUG (session:366) RESP: [405] Date: Fri, 30 Sep 2016 10:15:11 GMT Server:
WSGIServer/0.1 Python/2.7.6 Allow: GET Content-Type: application/json
Content-Length: 117 x-openstack-request-id:
req-f9354287-afa1-46bd-af69-f4a20f9611b3
RESP BODY: {"errors": [{"status": 405, "code": "", "links": [], "title":
"Method Not Allowed", "detail": "", "request_id": ""}]}

DEBUG (shell:694) Method Not Allowed (HTTP 405) (Request-ID:
req-f9354287-afa1-46bd-af69-f4a20f9611b3)
Traceback (most recent call last):
  File "/opt/python-magnumclient/.magnumclient-env/local/lib/python2.7/site-packages/magnumclient/shell.py", line 691, in main
    OpenStackMagnumShell().main(map(encodeutils.safe_decode, sys.argv[1:]))
  File "/opt/python-magnumclient/.magnumclient-env/local/lib/python2.7/site-packages/magnumclient/shell.py", line 633, in main
    args.func(self.cs, args)
  File "/opt/python-magnumclient/.magnumclient-env/local/lib/python2.7/site-packages/magnumclient/v1/cluster_templates_shell.py", line 150, in do_cluster_template_create
    cluster_template = cs.cluster_templates.create(**opts)
  File "/opt/python-magnumclient/.magnumclient-env/local/lib/python2.7/site-packages/magnumclient/v1/basemodels.py", line 107, in create
    return self._create(self._path(), new)
  File "/opt/python-magnumclient/.magnumclient-env/local/lib/python2.7/site-packages/magnumclient/common/base.py", line 49, in _create
    resp, body = self.api.json_request('POST', url, body=body)
  File "/opt/python-magnumclient/.magnumclient-env/local/lib/python2.7/site-packages/magnumclient/common/httpclient.py", line 366, in json_request
    resp = self._http_request(url, method, **kwargs)
  File "/opt/python-magnumclient/.magnumclient-env/local/lib/python2.7/site-packages/magnumclient/common/httpclient.py", line 350, in _http_request
    error_json.get('debug_info'), method, url)
MethodNotAllowed: Method Not Allowed (HTTP 405) (Request-ID:
req-f9354287-afa1-46bd-af69-f4a20f9611b3)
ERROR: Method Not Allowed (HTTP 405) (Request-ID:
req-f9354287-afa1-46bd-af69-f4a20f9611b3)
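The 405 with "Allow: GET" makes me wonder whether this Magnum API even
accepts POSTs on the URL the 2.3.0 client uses; if I remember right, a
Mitaka Magnum still speaks baymodels rather than clustertemplates. One way
I can check what the server actually exposes (VFSR1:9511 is from the debug
output above) is to list the v1 root with a token:

$ TOKEN=$(openstack token issue -f value -c id)
$ curl -s -H "X-Auth-Token: $TOKEN" http://VFSR1:9511/v1 | python -m json.tool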
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack/
attachments/20160930/8fd5a48a/attachment-0001.html>


Message: 15
Date: Fri, 30 Sep 2016 11:20:39 +0000
From: Davíð Örn Jóhannsson davidoj@siminn.is
To: openstack mailing list openstack@lists.openstack.org
Subject: [Openstack] External network connectivity problem
Message-ID: 778759E9-C45D-41BB-8AD0-B0474C24FD77@siminn.is
Content-Type: text/plain; charset="utf-8"

OpenStack Liberty
Ubuntu 14.04

I have a slightly strange problem. I'm running a Swift cluster, but the proxy
nodes reside in an OpenStack tenant. The private network of the tenant is
connected to an ha-router on the external storage network.

Now this used to work like a charm, where all my 3 proxy nodes within the
tenant were able to connect to the storage network and the ports on each of
the Swift nodes, but all of a sudden I lost the connectivity from 2, and
now if I spin up new instances within the project I cannot connect to the
Swift nodes, but I can still connect from this one remaining proxy.

I can ping the Swift nodes but cannot connect to any open ports [6000/2,
22, etc]. Here is where it gets a little strange: I have a non-Swift node on
the network that I can connect to without problems, and the Swift nodes are
not running a firewall.

root@swift-01:~# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

The nodes belong to the default security group which has the following
rules
Ingress IPv6 Any  Any       -          default
Egress  IPv4 Any  Any       0.0.0.0/0  -
Egress  IPv6 Any  Any       ::/0       -
Ingress IPv4 Any  Any       -          default
Ingress IPv4 ICMP Any       0.0.0.0/0  -
Ingress IPv4 TCP  22 (SSH)  0.0.0.0/0  -

I created a new project and set up a router against the storage network in
the same manner as my previous project and instances within that project
can connect to ports on all servers running on the storage network.

On one of the network nodes I ran "ip netns exec
qrouter-dfa2bdc2-7482-42c4-b166-515849119428 bash" (the router in the
faulty project) and tried to ping and telnet to the ports on the Swift
hosts without luck.

Any ideas on where to go next for troubleshooting?
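The next things I plan to try (the router UUID is the one above, the Swift
node IP and instance UUID are placeholders) are capturing traffic from
inside the router namespace:

# ip netns exec qrouter-dfa2bdc2-7482-42c4-b166-515849119428 \
    tcpdump -ni any host <swift-node-ip> and port 22

and comparing the security groups on the ports of the working and the
broken proxy instances:

$ neutron port-list --device-id <instance-uuid> -c id -c fixed_ips -c security_groups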
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack/
attachments/20160930/71d56684/attachment-0001.html>


Message: 16
Date: Fri, 30 Sep 2016 20:57:42 +0900
From: Akihiro Motoki amotoki@gmail.com
To: Kevin Benton kevin@benton.pub
Cc: "openstack@lists.openstack.org" openstack@lists.openstack.org
Subject: Re: [Openstack] neutron -f json output parsing
Message-ID:
<CALhU9tk6NMAwBZNHwNg2WaMemfzSzpZqBmiib5OE+p6Rv5Wh2w@mail.
gmail.com>
Content-Type: text/plain; charset="utf-8"

You hit https://bugs.launchpad.net/python-neutronclient/+bug/1524624.
The current neutronclient formats an object to a string even if a non-table
format is specified.
https://review.openstack.org/#/c/255696/ is the fix, but it has lacked
reviewers for 10 months :(
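Until that lands, one workaround is to let jq undo the double encoding
itself (fromjson needs a reasonably recent jq):

$ neutron lbaas-loadbalancer-show ef315cff-1d0a-4327-93c6-e9bf7e544e2d -f json \
    | jq -r '.pools | fromjson | .id'
1b792ace-0cbf-47cc-a3d5-2140c570ccee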

2016-09-30 18:16 GMT+09:00 Kevin Benton kevin@benton.pub:

Looks very much like a bug (double json encoding). File a bug on
launchpad
against python-neutronclient.

On Sep 30, 2016 05:10, "Bill Nerenberg" bill.nerenberg75@gmail.com
wrote:

Hi all

When I run neutron -f json in the command below, the pools value is
quoted so I cannot use it with JSON parsers like jq

For example, this is neutron's problematic output [1] (see the pools
line)

$ neutron lbaas-loadbalancer-show ef315cff-1d0a-4327-93c6-e9bf7e544e2d
-f json
{
"description": "",
"admin_state_up": false,
"tenant_id": "1bcf7ba13bcb496196d72f481bfebb5c",
"provisioning_status": "ACTIVE",
"vip_subnet_id": "d02c8267-30be-4cdc-aa4a-a7c1ca6504b8",
"listeners": "",
"vip_address": "10.0.2.160",
"vip_port_id": "07227a77-1afe-466b-9d54-20e8637fc2b0",
"provider": "f5networks",
"pools": "{\"id\": \"1b792ace-0cbf-47cc-a3d5-2140c570ccee\"}",
"id": "ef315cff-1d0a-4327-93c6-e9bf7e544e2d",
"operating_status": "ONLINE",
"name": "test-lbaasv2"
}

Which triggers an error in jq (or other tools)

neutron lbaas-loadbalancer-show ef315cff-1d0a-4327-93c6-e9bf7e544e2d -f
json | jq ".pools.id"
jq: error: Cannot index string with string

If instead I use the following JSON without the double quotes it works
just fine

$ cat myjson
{
"description": "",
"admin_state_up": false,
"tenant_id": "1bcf7ba13bcb496196d72f481bfebb5c",
"provisioning_status": "ACTIVE",
"vip_subnet_id": "d02c8267-30be-4cdc-aa4a-a7c1ca6504b8",
"listeners": "",
"vip_address": "10.0.2.160",
"vip_port_id": "07227a77-1afe-466b-9d54-20e8637fc2b0",
"provider": "f5networks",
"pools": {"id": "1b792ace-0cbf-47cc-a3d5-2140c570ccee"},
"id": "ef315cff-1d0a-4327-93c6-e9bf7e544e2d",
"operating_status": "ONLINE",
"name": "test-lbaasv2"
}
$ cat myjson | jq ".pools.id"
"1b792ace-0cbf-47cc-a3d5-2140c570ccee"

Questions, questions...

Is the output of [1] intentional or is it a bug? If it is not a bug and
it is intentional... how are we expected to parse it?

Comments would be greatly appreciated

Many thanks


Mailing list: http://lists.openstack.org/cgi
-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi
-bin/mailman/listinfo/openstack


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/
openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/
openstack

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack/
attachments/20160930/3dbb55ec/attachment-0001.html>



Openstack mailing list
openstack@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

End of Openstack Digest, Vol 39, Issue 29



Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
asked Oct 1, 2016 in openstack by YOUDI