
[Openstack] Network speed issue

0 votes

Hi all!

In my OpenStack installation (Icehouse, using nova legacy networking)
the VMs talk to each other over a 1Gbps network link.

My issue is that although file transfers between physical (hypervisor)
nodes can saturate that link, transfers between VMs reach much lower
speeds, e.g. 30MB/s (approx. 240Mbps).

My tests are performed by scp'ing a large image file (approx. 4GB)
between the nodes and between the VMs.

I have updated my images to use the e1000 NIC driver but the results
remain the same.

What other limiting factors could there be?

Does it have to do with the disk driver I am using? Does the filesystem
of the hypervisor node play a significant role?

Any ideas on how to approach saturating the 1Gbps link?

Best regards,

George


asked Dec 16, 2014 in openstack by Georgios_Dimitrakaki (5,080 points)   3 11 16
retagged Feb 25, 2015 by admin

21 Responses

0 votes

Disable offloading on the nodes with:

ethtool -K interfaceName gro off gso off tso off

and then try it again.
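
For example (illustrative only; eth1 is just a placeholder interface
name, substitute your own), disabling the three offloads and then
confirming the change:

# ethtool -K eth1 gro off gso off tso off
# ethtool -k eth1 | egrep 'generic-receive|generic-segmentation|tcp-segmentation'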
responded Dec 16, 2014 by Adrián_Norte (860 points)   1
0 votes

I believe that they are already disabled.

Here is the ethtool output:

# ethtool --show-offload eth1
Features for eth1:
rx-checksumming: on
tx-checksumming: on
tx-checksum-ipv4: off
tx-checksum-unneeded: off
tx-checksum-ip-generic: on
tx-checksum-ipv6: off
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
tx-tcp-segmentation: on
tx-tcp-ecn-segmentation: off
tx-tcp6-segmentation: on
udp-fragmentation-offload: off [fixed]
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: on [fixed]
tx-vlan-offload: on [fixed]
ntuple-filters: off [fixed]
receive-hashing: off [fixed]
highdma: on [fixed]
rx-vlan-filter: off [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
fcoe-mtu: off [fixed]
loopback: off [fixed]

Regards,

George

responded Dec 16, 2014 by Georgios_Dimitrakaki (5,080 points)   3 11 16
0 votes

That shows that those 3 offload settings are enabled.
responded Dec 16, 2014 by Adrián_Norte (860 points)   1
0 votes

On 12/16/2014 09:38 AM, Adrián Norte Fernández wrote:
Disable offloading on the nodes with: ethtool -K interfaceName gro off
gso off tso off

And then try it again

That should only make things better if there was some sort of actual
functional problem, no?

When diagnosing "network" performance issues I like to get other things
out of the way - so get rid of filesystems, encryption, etc. To that
end I would suggest running a basic network benchmark. I have a natural
bias towards netperf of course :) http://www.netperf.org/ but even
iperf would suffice.
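
For instance, a minimal iperf run between two VMs (a sketch only;
10.0.0.5 stands in for the receiving VM's address) takes disk I/O and
ssh encryption out of the picture. On the receiving VM:

iperf -s

and on the sending VM:

iperf -c 10.0.0.5 -t 30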

I've also found (based on some ancient work back in whatever the "D"
release was called time frame) that emulated NICs (in my case the
Realtek) are much slower than the virtio "NIC." In the case of the
Realtek emulation at least, my recollection is that the maximum I got
out of an instance was on the order of about 250 Mbit/s, as it happens...
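
On that note: since the e1000 driver was forced in the images, it may be
worth trying the virtio model instead. As a rough sketch (assuming the
Glance hw_vif_model image property is honoured by your hypervisor
driver; IMAGE_ID is a placeholder):

glance image-update --property hw_vif_model=virtio IMAGE_ID

and then boot a fresh instance from the updated image before re-testing.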

happy benchmarking,

rick jones

responded Dec 16, 2014 by Rick_Jones (1,240 points)   1 2
0 votes

Oops... it seems that I have been confused.

The pasted part is indeed from the node; I was looking somewhere
else...

Thanks a lot for noticing that, Adrian!

I will turn it off on the nodes and test again!

Should it be off on both the nodes and the VMs?

Regards,

George

responded Dec 16, 2014 by Georgios_Dimitrakaki (5,080 points)   3 11 16
0 votes

Disabling it only on the nodes should boost the speed, but disabling it
in the VMs too improves the speed greatly.
responded Dec 16, 2014 by Adrián_Norte (860 points)   1
0 votes

I have changed that on both the node and the VMs and it actually made
things worse.

I did that on both the eth1 and br100 interfaces on the physical node.

The transfer speed is now 15MB/s, half of what I had before!

Have I missed something? I believe this is not the expected
behaviour.

Here are the outputs for both br100 and eth1 on the node in case I have
missed something:

# ethtool -k br100
Features for br100:
rx-checksumming: off [fixed]
tx-checksumming: on
tx-checksum-ipv4: off
tx-checksum-unneeded: off
tx-checksum-ip-generic: on
tx-checksum-ipv6: off
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: on [fixed]
tcp-segmentation-offload: off
tx-tcp-segmentation: off
tx-tcp-ecn-segmentation: on
tx-tcp6-segmentation: on
udp-fragmentation-offload: on [fixed]
generic-segmentation-offload: off
generic-receive-offload: off
large-receive-offload: off [fixed]
rx-vlan-offload: off [fixed]
tx-vlan-offload: off [fixed]
ntuple-filters: off [fixed]
receive-hashing: off [fixed]
highdma: on [fixed]
rx-vlan-filter: off [fixed]
vlan-challenged: off [fixed]
tx-lockless: on [fixed]
netns-local: on [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: on [fixed]
tx-udp_tnl-segmentation: on [fixed]
fcoe-mtu: off [fixed]
loopback: off [fixed]

# ethtool -k eth1
Features for eth1:
rx-checksumming: on
tx-checksumming: on
tx-checksum-ipv4: off
tx-checksum-unneeded: off
tx-checksum-ip-generic: on
tx-checksum-ipv6: off
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: off
tx-tcp-segmentation: off
tx-tcp-ecn-segmentation: off
tx-tcp6-segmentation: off
udp-fragmentation-offload: off [fixed]
generic-segmentation-offload: off
generic-receive-offload: off
large-receive-offload: off [fixed]
rx-vlan-offload: on [fixed]
tx-vlan-offload: on [fixed]
ntuple-filters: off [fixed]
receive-hashing: off [fixed]
highdma: on [fixed]
rx-vlan-filter: off [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
fcoe-mtu: off [fixed]
loopback: off [fixed]

Best,

George

responded Dec 16, 2014 by Georgios_Dimitrakaki (5,080 points)   3 11 16
0 votes

Try enabling gso and tso but keeping gro disabled.
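
In other words (eth1 again as a placeholder interface name), on the
nodes and, if it helps, inside the VMs as well:

ethtool -K eth1 gso on tso on gro off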
responded Dec 16, 2014 by Adrián_Norte (860 points)   1
0 votes

Changing

gso on
tso on
gro off

got me back to the initial status.

Although now it starts at approximately 65-70MB/s for a few seconds,
it then drops down to 30MB/s.

Regards,

George

responded Dec 16, 2014 by Georgios_Dimitrakaki (5,080 points)   3 11 16
0 votes

On 12/16/2014 11:09 AM, Georgios Dimitrakakis wrote:
Changing

gso on
tso on
gro off

got me back to the initial status.

Although now it starts with approximately 65-70MB/s for a few seconds
but then it drops down to 30MB/s

What do you see if you use a "pure" networking benchmark such as netperf
or iperf?
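
As a minimal sketch (10.0.0.5 again standing in for the target VM's
address), with netserver already running on that VM:

netperf -H 10.0.0.5 -t TCP_STREAM -l 30

which reports the sustained TCP throughput without any disk or ssh
involvement.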

responded Dec 16, 2014 by Rick_Jones (1,240 points)   1 2
...