
[Openstack-operators] UDP Buffer Filling

0 votes

Hi List,

We are running Mitaka with VLAN provider networking. We've recently
encountered a problem where the UDP receive queue on instances is filling
up and we begin dropping packets. Moving instances out of OpenStack onto
bare metal resolves the issue completely.

These instances are running Asterisk, which should be pulling these packets
off the queue, but it appears to be falling behind no matter the resources
we give it.

We can't seem to pin down a reason why we would see this behavior in KVM
but not on metal. I'm hoping someone on the list might have some insight or
ideas.

Thank You,

John


OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
asked Jul 29, 2017 in openstack-operators by John_Petrini (1,880 points)   4 5

10 Responses

0 votes

Hi Pedro,

Thank you for the suggestion. I will look into this.

John Petrini

Platforms Engineer // CoreDial, LLC // coredial.com
Twitter: https://twitter.com/coredial // Google+:
https://plus.google.com/104062177220750809525/posts

751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
P: 215.297.4400 x232 // F: 215.297.4401 // E: jpetrini@coredial.com

On Thu, Jul 27, 2017 at 12:25 PM, Pedro Sousa pgsousa@gmail.com wrote:

Hi,

have you considered implementing a network acceleration technique such as
OVS-DPDK or SR-IOV?

Workloads like voice and video have low-latency requirements, so you might
need something like DPDK to avoid these issues.

Regards

responded Jul 27, 2017 by John_Petrini (1,880 points)   4 5
0 votes

Hi John,

Do you know where the packets are being dropped? On the physical interface, the tap device, the OVS port, or inside the VM?

We hit UDP packet loss under high pps. Here are some things you may want to double-check:

  1. Check whether your physical interface is dropping packets. If your rx ring size or rx queue count is left at the default, it will usually drop UDP packets once traffic reaches roughly 200 kpps per CPU core (RSS distributes traffic across cores; in my experience a single core starts dropping at about 200 kpps).

You can get statistics from "ethtool -S <interface>" to check whether packets are being lost because the rx queue is full, and use ethtool to increase the ring size. In my environment, increasing the ring size from 512 to 4096 doubled the throughput from 200 kpps to 400 kpps on one CPU core. This may help in some cases.
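As a sketch of those two steps (eth0 is a placeholder for your NIC, and the exact counter names vary by driver):

```shell
# Show current and hardware-maximum RX/TX ring sizes.
ethtool -g eth0

# Look for counters that indicate drops due to a full RX ring
# (names such as rx_missed_errors or rx_no_buffer_count, driver-dependent).
ethtool -S eth0 | grep -iE 'drop|miss|no_buffer'

# Grow the RX ring toward the maximum reported above.
ethtool -G eth0 rx 4096
```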

  2. Check whether your tap device is dropping packets. The default tx queue length is 500 or 1000; increasing it to 10000 may help in some cases.
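For example (the tap device name below is a placeholder; use the one from your instance):

```shell
# Check the current queue length (reported as "qlen").
ip link show tap0 | grep qlen

# Raise it to 10000 as suggested above.
ip link set tap0 txqueuelen 10000
```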

  3. Check nf_conntrack_max on your compute and network nodes. The default value is 65535; in our case the connection count usually reaches 500k-1m. We changed it as follows:
    net.netfilter.nf_conntrack_max=10240000
    net.nf_conntrack_max=10240000
    If you see something like "nf_conntrack: table full, dropping packet" in /var/log/messages, you have hit this one.
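A quick way to check whether you are near the limit (log path assumes a RHEL/CentOS-style host):

```shell
# Current number of tracked connections vs. the configured maximum.
sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max

# Raise the limit at runtime (persist it in /etc/sysctl.conf as well).
sysctl -w net.netfilter.nf_conntrack_max=10240000

# Confirm whether the table has been overflowing.
grep 'nf_conntrack: table full' /var/log/messages
```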

  4. Check whether the drops happen inside your VM; increasing the following parameters may help in some cases:

net.core.rmem_max / net.core.rmem_default / net.core.wmem_max / net.core.wmem_default
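For instance, inside the guest (the 16 MB value is only illustrative, not a recommendation):

```shell
# Inspect the current socket buffer limits.
sysctl net.core.rmem_max net.core.rmem_default net.core.wmem_max net.core.wmem_default

# Allow applications to request larger UDP receive buffers.
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.rmem_default=16777216
```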

  5. If you are using the default network driver (virtio-net), check whether the vhost thread of your VM is saturated with CPU soft IRQs. You can find it by its process name, vhost-<pid-of-your-VM>. In that case, you can try the following feature from Liberty ("L"):

https://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/libvirt-virtiomq.html
Multi-queue may help in some cases, but it will use more vhost threads and more CPU on your host.
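One way to check for vhost saturation (the qemu process name varies by distribution, e.g. qemu-kvm or qemu-system-x86_64):

```shell
# Find the qemu process of the instance.
pgrep -a qemu

# List the vhost kernel threads by CPU usage; one of them pinned near
# 100% suggests a single-queue bottleneck.
ps -eo pid,comm,%cpu --sort=-%cpu | grep vhost-
```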

  6. Sometimes CPU/NUMA pinning can also help, but you need to reserve those CPUs and plan them statically.

The first step is to figure out where the packets are being lost and which component is the bottleneck. Hope this helps, John.
Thanks.

Regards,
Liping Mao

responded Jul 28, 2017 by Liping_Mao_-X_(limao (1,580 points)   5
0 votes

My message was automatically rejected by openstack-operators-owner@lists.openstack.org, so I am resending it.

responded Jul 28, 2017 by Liping_Mao_-X_(limao (1,580 points)   5
0 votes

Hello John,

a common problem is packets being dropped when they pass from the
hypervisor to the instance. There is a bottleneck there.

check the 'virsh dumpxml' output of one of the instances that is
dropping packets. The interface section should look like:

<interface type='bridge'>
  <mac address='xx:xx:xx:xx:xx:xx'/>
  <source bridge='qbr5b3fc033-e2'/>
  <target dev='tap5b3fc033-e2'/>
  <model type='virtio'/>
  <driver name='vhost' queues='4'/>
  <alias name='net0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
</interface>

How many queues do you have? Having only one queue, or the parameter
missing completely, is not good.

In Mitaka, Nova should use one queue for every instance CPU core you
have. It is worth checking whether this is set correctly in your setup.
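For example (the instance name is a placeholder; use the libvirt domain name from "virsh list"):

```shell
# Print the virtio model line and the line after it, where the
# "queues" attribute would appear if multiqueue is enabled.
virsh dumpxml instance-00000042 | grep -A1 "model type='virtio'"
```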

Cheers,

Saverio

responded Jul 28, 2017 by Saverio_Proto (5,480 points)   1 3 6
0 votes

Hi Saverio,

Thanks for the info. The parameter is missing completely:

<interface type='bridge'>
  <mac address='fa:16:3e:19:3d:b8'/>
  <source bridge='qbrba20d1ab-30'/>
  <target dev='tapba20d1ab-30'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
</interface>

I've come across the blueprint for adding the image property
hw_vif_multiqueue_enabled. Do you know if this feature is available in
Mitaka?

John Petrini


responded Jul 28, 2017 by John_Petrini (1,880 points)   4 5
0 votes

On Jul 28, 2017 8:51 AM, "John Petrini" jpetrini@coredial.com wrote:

Hi Saverio,

Thanks for the info. The parameter is missing completely.

I've come across the blueprint for adding the image property
hw_vif_multiqueue_enabled. Do you know if this feature is available in
Mitaka?

It was merged 2 years ago so should have been there since Liberty.

responded Jul 28, 2017 by Erik_McCormick (3,880 points)   2 3
0 votes

It is merged in Mitaka, but your Glance images must be decorated with:

hw_vif_multiqueue_enabled='true'

When you run "openstack image show <uuid>" you should see this among the
properties, and then you will have multiqueue.
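A sketch of the commands involved (the image UUID, interface name, and queue count are placeholders; on older guest kernels the extra queues may also need enabling inside the VM):

```shell
# Tag the image so instances booted from it get one queue per vCPU.
openstack image set --property hw_vif_multiqueue_enabled=true <image-uuid>

# Inside the guest, enable the extra queues (e.g. for a 4-vCPU instance).
ethtool -L eth0 combined 4
```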

Saverio

2017-07-28 14:50 GMT+02:00 John Petrini jpetrini@coredial.com:

Hi Saverio,

Thanks for the info. The parameter is missing completely:

<interface type='bridge'>
  <mac address='fa:16:3e:19:3d:b8'/>
  <source bridge='qbrba20d1ab-30'/>
  <target dev='tapba20d1ab-30'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03'

function='0x0'/>

I've come across the blueprint for adding the image property
hw_vif_multiqueue_enabled. Do you know if this feature is available in
Mitaka?

John Petrini


responded Jul 28, 2017 by Saverio_Proto (5,480 points)   1 3 6
0 votes

Hi Liping,

Thank you for the detailed response! I've gone over our environment and
checked the various values.

First I found that we are dropping packets on the physical NICs as well as
inside the instance (though only when its UDP receive buffer overflows).
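For reference, the per-socket receive-queue depth and drop counters are visible in /proc/net/udp inside a Linux guest; a minimal sketch of reading them:

```python
# Sketch: list UDP sockets with a non-empty receive queue or a non-zero
# drop counter by parsing /proc/net/udp (Linux; field layout per proc(5)).
import os

def udp_socket_stats(text):
    """Return (local_address, rx_queue_bytes, drops) for each socket line."""
    stats = []
    for line in text.splitlines()[1:]:          # skip the header line
        fields = line.split()
        if len(fields) < 13:
            continue
        local = fields[1]                       # hex ip:port
        rx_queue = int(fields[4].split(':')[1], 16)  # tx_queue:rx_queue
        drops = int(fields[12])                 # per-socket drop counter
        stats.append((local, rx_queue, drops))
    return stats

if __name__ == "__main__" and os.path.exists('/proc/net/udp'):
    with open('/proc/net/udp') as f:
        for local, rxq, drops in udp_socket_stats(f.read()):
            if rxq or drops:
                print(local, 'rx_queue=%d' % rxq, 'drops=%d' % drops)
```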

Our physical NICs are using the default ring size, and our tap interfaces
are using the default txqueuelen of 500. There are dropped packets on the
tap interfaces, but the counts are rather low and don't seem to increase
very often, so I'm not sure there's a problem there; I'm considering
adjusting the value anyway to avoid issues in the future.

We already tune these values in the VM. Would you suggest tuning them on
the compute nodes as well?
net.core.rmem_max / net.core.rmem_default / net.core.wmem_max /
net.core.wmem_default

I'm going to do some testing with multiqueues enabled since both you and
Saverio have suggested it.

John Petrini


responded Jul 28, 2017 by John_Petrini (1,880 points)   4 5
0 votes

We already tune these values in the VM. Would you suggest tuning them on the compute nodes as well?

No need on the compute nodes (AFAIK).

How much pps does your VM need to handle?
You can monitor CPU usage, especially si (softirq time), to see where drops may happen. If you see vhost almost reaching 100% CPU, multi-queue may help in some cases.
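For example, on the compute node (mpstat is part of the sysstat package):

```shell
# Per-CPU utilisation including %soft; one CPU stuck high in %soft while
# the rest are idle points at a single-queue (single-vhost) bottleneck.
mpstat -P ALL 1

# Per-CPU NET_RX softirq counters, refreshed every second.
watch -n1 "grep NET_RX /proc/softirqs"
```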

Thanks.

Regards,
Liping Mao

responded Jul 29, 2017 by Liping_Mao_-X_(limao (1,580 points)   5
0 votes

John,

multiqueue support will require qemu 2.5+.
I wonder why you need this feature; it only helps in the case of really
high incoming pps or bandwidth.
I'm not sure UDP packet loss can be solved by it, but of course it is
worth a try.

my 2c.

Thanks,
Eugene.

responded Jul 29, 2017 by Eugene_Nikanorov (7,480 points)   1 3 7
...