
[openstack-dev] [nova] Discussions for DPDK support in OpenStack

0 votes

Hi Nova team,

I'm writing this e-mail because I'd like to have a discussion about DPDK
support at OpenStack Summit in Boston.

We have developed a DPDK-based patch panel named SPP[1], and we'd like
to start working on OpenStack (an ML2 driver) to develop "networking-spp".

In particular, we'd like to use DPDK ivshmem, which used to be used to
create the "dpdkr" interface in ovs-dpdk[2].

We have filed a blueprint[3] for that use case.

As we will be attending the Boston Summit, could we have a discussion
with you there?

[1] http://www.dpdk.org/browse/apps/spp/
[2]
http://openvswitch.org/support/dist-docs-2.5/INSTALL.DPDK.md.html#L446-L490
[3] https://blueprints.launchpad.net/nova/+spec/libvirt-options-for-dpdk

Sincerely,

--
Tetsuro Nakamura nakamura.tetsuro@lab.ntt.co.jp
NTT Network Service Systems Laboratories
TEL:0422 59 6914(National)/+81 422 59 6914(International)
3-9-11, Midori-Cho Musashino-Shi, Tokyo 180-8585 Japan


asked Jul 31, 2017 in openstack-dev by TETSURO_NAKAMURA (320 points)

12 Responses

0 votes

Note that DPDK has removed ivshmem; see
http://dpdk.org/ml/archives/dev/2016-July/044552.html

responded Apr 28, 2017 by Guo,_Ruijing (1,060 points)
0 votes

On Fri, 2017-04-28 at 13:23 +0900, TETSURO NAKAMURA wrote:
Hi Nova team,

I'm writing this e-mail because I'd like to have a discussion about
DPDK support at OpenStack Summit in Boston.

We have developed a DPDK-based patch panel named SPP[1], and we'd
like to start working on OpenStack (an ML2 driver) to develop
"networking-spp".

In particular, we'd like to use DPDK ivshmem, which used to be used
to create the "dpdkr" interface in ovs-dpdk[2].

To the best of my knowledge, IVSHMEM ports are no longer supported in
upstream. The documentation for this feature was recently removed from
OVS [1] stating:

  • The ivshmem library has been removed in DPDK since DPDK 16.11.
  • The instructions/scheme provided will not work with currently
    supported and future DPDK versions.
  • The linked patch needed to enable support in QEMU has never
    been upstreamed and does not apply to the last 4 QEMU releases.
  • Userspace vhost has become the de facto OVS-DPDK path to the guest.

Note: I worked on DPDK vSwitch [2] way back when, and there were severe
security implications with sharing a chunk of host memory between
multiple guests (which is how IVSHMEM works). I'm not at all surprised
the feature was killed.

We have filed a blueprint[3] for that use case.

Per above, I don't think this is necessary. vhost-user ports already
work as expected in nova.
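
For example, the libvirt device nova ends up generating for such a port
looks roughly like the sketch below (a sketch only; the socket path and
domain name are made up for illustration):

    # Sketch only: attach a vhost-user interface to a running domain via
    # libvirt-python. Socket path and domain name are illustrative.
    import libvirt

    VHOSTUSER_IF = """
    <interface type='vhostuser'>
      <source type='unix' path='/var/run/openvswitch/vhu1234' mode='client'/>
      <model type='virtio'/>
    </interface>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')  # hypothetical domain name
    dom.attachDeviceFlags(VHOSTUSER_IF, libvirt.VIR_DOMAIN_AFFECT_LIVE)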

As we will be attending the Boston Summit, could we have a discussion
with you there?

I'll be around the summit (IRC: sfinucan) if you want to chat more.
However, I'd suggest reaching out to Sean Mooney or Igor Duarte Cardoso
(both CCd) if you want further information about general support of
OVS-DPDK in OpenStack and DPDK acceleration in SFC, respectively. I'd
also suggest looking at networking-ovs-dpdk [3], which contains a lot of
helper tools for using OVS-DPDK in OpenStack, along with links to a
BrightTALK talk I recently gave on the state of OVS-DPDK in
OpenStack.

Hope this helps,
Stephen

[1] https://github.com/openvswitch/ovs/commit/90ca71dd317fea1ccf0040389dae895aa7b2b561
[2] https://github.com/01org/dpdk-ovs
[3] https://github.com/openstack/networking-ovs-dpdk


responded Apr 28, 2017 by Stephen_Finucane (1,620 points)
0 votes

Thank you for your reply!

On 2017/04/28 4:38, sfinucan@redhat.com wrote:

To the best of my knowledge, IVSHMEM ports are no longer supported in
upstream. The documentation for this feature was recently removed from
OVS [1] stating:

  • The ivshmem library has been removed in DPDK since DPDK 16.11.
  • The instructions/scheme provided will not work with currently
    supported and future DPDK versions.
  • The linked patch needed to enable support in QEMU has never
    been upstreamed and does not apply to the last 4 QEMU releases.
  • Userspace vhost has become the de facto OVS-DPDK path to the guest.

Note: I worked on DPDK vSwitch [2] way back when, and there were severe
security implications with sharing a chunk of host memory between
multiple guests (which is how IVSHMEM works). I'm not at all surprised
the feature was killed.

We have filed a blueprint[3] for that use case.

Per above, I don't think this is necessary. vhost-user ports already
work as expected in nova.

Yes, IVSHMEM is a critical issue for multi-tenancy.
Still, we'd like to note that there are private cloud use cases, such
as carrier NFV, in which sharing host memory is not a critical issue.
In those use cases we'd like to use ivshmem for its performance.

As we will be attending the Boston Summit, could we have a discussion
with you there?

I'll be around the summit (IRC: sfinucan) if you want to chat more.
However, I'd suggest reaching out to Sean Mooney or Igor Duarte Cardoso
(both CCd) if you want further information about general support of
OVS-DPDK in OpenStack and DPDK acceleration in SFC, respectively. I'd
also suggest looking at networking-ovs-dpdk [3], which contains a lot of
helper tools for using OVS-DPDK in OpenStack, along with links to a
BrightTALK talk I recently gave on the state of OVS-DPDK in
OpenStack.

Thank you very much for the information!
I'm already in touch with Sean Mooney on another matter,
so I will try to reach out to Igor Duarte Cardoso-san.

Hope this helps,
Stephen


--
Tetsuro Nakamura nakamura.tetsuro@lab.ntt.co.jp
NTT Network Service Systems Laboratories
TEL:0422 59 6914(National)/+81 422 59 6914(International)
3-9-11, Midori-Cho Musashino-Shi, Tokyo 180-8585 Japan


responded May 8, 2017 by TETSURO_NAKAMURA (320 points)
0 votes

On Fri, Apr 28, 2017 at 09:38:38AM +0100, sfinucan@redhat.com wrote:

Note: I worked on DPDK vSwitch [2] way back when, and there were severe
security implications with sharing a chunk of host memory between
multiple guests (which is how IVSHMEM works). I'm not at all surprised
the feature was killed.

Security is only one of the issues. The upstream QEMU maintainers
consider the ivshmem device to have a seriously flawed design and
discourage anyone from using it. For anything network related, the
QEMU maintainers strongly recommend using vhost-user.

IIUC, there is some experimental work to create a virtio-based
replacement for ivshmem for non-network VM-to-VM communication, but
that is not going to be usable for a while yet. This, however, just
reinforces the point that ivshmem is considered obsolete / flawed
technology by the QEMU maintainers.
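
For comparison, the QEMU side of a vhost-user NIC looks roughly like
this sketch (paths and sizes are illustrative only; the key point is
that guest RAM must come from a shareable memory backend):

    # Sketch only: boot QEMU with a vhost-user NIC. Socket path, memory
    # size, and hugepage mount point are illustrative.
    import subprocess

    subprocess.check_call([
        "qemu-system-x86_64", "-m", "1024",
        "-object",
        "memory-backend-file,id=mem0,size=1024M,mem-path=/dev/hugepages,share=on",
        "-numa", "node,memdev=mem0",  # vhost-user needs shareable guest RAM
        "-chardev", "socket,id=chr0,path=/tmp/vhost-user.sock",
        "-netdev", "vhost-user,id=net0,chardev=chr0",
        "-device", "virtio-net-pci,netdev=net0",
    ])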

Regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|


responded May 8, 2017 by Daniel_P._Berrange (27,920 points)
0 votes

Thank you for the information!

So you mean the situation has not changed since the referenced
thread[1] on the qemu-devel ML three years ago, and the difficulties
of ivshmem you mentioned are described in that thread. Am I right?

[1] "[Qemu-devel] Why I advise against using ivshmem"
https://lists.linuxfoundation.org/pipermail/virtualization/2014-June/026767.html

On 2017/05/08 9:09, Daniel P. Berrange wrote:

Security is only one of the issues. The upstream QEMU maintainers
consider the ivshmem device to have a seriously flawed design and
discourage anyone from using it. For anything network related, the
QEMU maintainers strongly recommend using vhost-user.

IIUC, there is some experimental work to create a virtio-based
replacement for ivshmem for non-network VM-to-VM communication, but
that is not going to be usable for a while yet. This, however, just
reinforces the point that ivshmem is considered obsolete / flawed
technology by the QEMU maintainers.

Regards,
Daniel

--
Tetsuro Nakamura nakamura.tetsuro@lab.ntt.co.jp
NTT Network Service Systems Laboratories
TEL:0422 59 6914(National)/+81 422 59 6914(International)
3-9-11, Midori-Cho Musashino-Shi, Tokyo 180-8585 Japan


responded May 8, 2017 by TETSURO_NAKAMURA (320 points)
0 votes

Hi Nova team,

It has been quite a long time since the last discussion,
but let me confirm one thing about the thread below.

IIUC, ivshmem support is not welcome in Nova because ivshmem is no
longer supported by DPDK+QEMU.

But what would you say if it were supported outside the DPDK tree and
could be used with the newest QEMU version?

We are now developing SPP, a DPDK-based vswitch, and we are thinking
about implementing ivshmem support under our SPP code tree if Nova (or,
in the first instance, the libvirt community) will accept ivshmem
configuration.

Your advice would be very helpful for the decision-making in our project.

Thanks in advance,
Tetsuro Nakamura

On 2017/05/08 22:09, Daniel P. Berrange wrote:

Security is only one of the issues. The upstream QEMU maintainers
consider the ivshmem device to have a seriously flawed design and
discourage anyone from using it. For anything network related, the
QEMU maintainers strongly recommend using vhost-user.

IIUC, there is some experimental work to create a virtio-based
replacement for ivshmem for non-network VM-to-VM communication, but
that is not going to be usable for a while yet. This, however, just
reinforces the point that ivshmem is considered obsolete / flawed
technology by the QEMU maintainers.

Regards,
Daniel

--
Tetsuro Nakamura nakamura.tetsuro@lab.ntt.co.jp
NTT Network Service Systems Laboratories
TEL:0422 59 6914(National)/+81 422 59 6914(International)
3-9-11, Midori-Cho Musashino-Shi, Tokyo 180-8585 Japan


responded Jul 26, 2017 by TETSURO_NAKAMURA (320 points)
0 votes

On 07/26/2017 03:06 AM, TETSURO NAKAMURA wrote:
Hi Nova team,

It has been quite a long time since the last discussion,
but let me confirm one thing about the thread below.

IIUC, ivshmem support is not welcome in Nova because ivshmem is no
longer supported by DPDK+QEMU.

But what would you say if it were supported outside the DPDK tree and
could be used with the newest QEMU version?

We are now developing SPP, a DPDK-based vswitch, and we are thinking
about implementing ivshmem support under our SPP code tree if Nova (or,
in the first instance, the libvirt community) will accept ivshmem
configuration.

Your advice would be very helpful for the decision-making in our project.

I think this is a question that the libvirt community would first need
to weigh in on since Nova is downstream from libvirt -- at least in the
sense of low-level hypervisor support.

Best,
-jay


responded Jul 26, 2017 by Jay_Pipes (59,760 points)
0 votes

-----Original Message-----
From: Jay Pipes [mailto:jaypipes@gmail.com]
Sent: Wednesday, July 26, 2017 2:50 PM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org; TETSURO NAKAMURA
nakamura.tetsuro@lab.ntt.co.jp; Daniel P. Berrange
berrange@redhat.com; sfinucan@redhat.com; mriedemos@gmail.com
Cc: [Internal][ML] srv-apl-arch srv-apl-arch@lab.ntt.co.jp
Subject: Re: [openstack-dev] [nova] Discussions for ivshmem support in
OpenStack Nova


I think this is a question that the libvirt community would first need
to weigh in on since Nova is downstream from libvirt -- at least in the
sense of low-level hypervisor support.
[Mooney, Sean K] Well, ivshmem was deprecated in DPDK and then removed, and it was never supported with hugepage memory (as opposed to POSIX shared memory) in QEMU, where it is also deprecated, so the first community to approach would be QEMU.

I would caution against building new solutions on top of ivshmem unless you have first measured and demonstrated that vhost-user is not suitable.

Best,
-jay



responded Jul 26, 2017 by Mooney,_Sean_K (3,580 points)
0 votes

On 07/26/2017 09:57 AM, Daniel P. Berrange wrote:
On Wed, Jul 26, 2017 at 09:50:23AM -0400, Jay Pipes wrote:

I think this is a question that the libvirt community would first need to
weigh in on since Nova is downstream from libvirt -- at least in the sense
of low-level hypervisor support.

Libvirt already supports ivshmem device config

http://libvirt.org/formatdomain.html#elementsShmem

Sorry, I suppose I should have said QEMU, not libvirt. Daniel, you were
the one who specifically discouraged Tetsuro from doing anything with
ivshmem:

http://lists.openstack.org/pipermail/openstack-dev/2017-July/120136.html

Best,
-jay


responded Jul 26, 2017 by Jay_Pipes (59,760 points)
0 votes

On 2017/07/27 0:58, Daniel P. Berrange wrote:
Libvirt already supports ivshmem device config:

http://libvirt.org/formatdomain.html#elementsShmem

Sorry, I suppose I should have said QEMU, not libvirt. Daniel, you were
the one who specifically discouraged Tetsuro from doing anything with
ivshmem:

http://lists.openstack.org/pipermail/openstack-dev/2017-July/120136.html

'ivshmem' was the original device in QEMU and that is indeed still
deprecated.

There are now two replacements 'ivshmem-plain' and 'ivshmem-doorbell'
which can be used instead, which are considered supported by QEMU,
though most people will still recommend using 'vhostuser' instead
if the use of ivshmem is at all network related.

Regards,
Daniel

Thank you very much for the information about the current status of
ivshmem in QEMU.
I now understand that 'ivshmem', 'ivshmem-plain' and 'ivshmem-doorbell'
are different solutions, and that libvirt already supports the latter two.
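
If I read formatdomain.html correctly, the two supported variants are
configured with <shmem> elements roughly as in the sketch below (the
device names, sizes, and server socket path are just examples from my
side):

    # Sketch only: libvirt <shmem> XML for the two supported models, per
    # formatdomain.html#elementsShmem. Names/sizes/paths are illustrative.
    SHMEM_PLAIN = """
    <shmem name='shmem0'>
      <model type='ivshmem-plain'/>
      <size unit='M'>4</size>
    </shmem>
    """

    # ivshmem-doorbell additionally connects to the ivshmem server socket
    # and can signal peers via MSI vectors.
    SHMEM_DOORBELL = """
    <shmem name='shmem1'>
      <model type='ivshmem-doorbell'/>
      <size unit='M'>4</size>
      <server path='/var/run/shmem1-sock'/>
      <msi vectors='32' ioeventfd='on'/>
    </shmem>
    """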

  • Mr. Sean Mooney: did you mean that you caution against building new
    solutions on top of 'ivshmem' only, or on top of 'ivshmem-plain' and
    'ivshmem-doorbell' as well?

--
Tetsuro Nakamura nakamura.tetsuro@lab.ntt.co.jp
NTT Network Service Systems Laboratories
TEL:0422 59 6914(National)/+81 422 59 6914(International)
3-9-11, Midori-Cho Musashino-Shi, Tokyo 180-8585 Japan


responded Jul 27, 2017 by TETSURO_NAKAMURA (320 points)
...