
[Openstack-operators] Moving from distro packages to containers (or virtualenvs...)


Hi there all-ye-operators,

I am investigating how to help move godaddy from rpms to a
container-like solution (virtualenvs, lxc, or docker...) and a set of
questions that comes up is the following (and I would think that some
folks on this mailing list may have some useful insight into the answers):

  • Have you done the transition?

  • How did the transition go?

  • Was/is kolla used or looked into? or something custom?

  • How long did it take to do the transition from a package based
    solution (with say puppet/chef being used to deploy these packages)?

    • Follow-up being how big was the team to do this?
  • What was the roll-out strategy to achieve the final container solution?

Any other feedback (and/or questions that I missed)?

Thanks,

Josh


OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
asked May 12, 2016 in openstack-operators by harlowja_at_fastmail (16,200 points)   2 7 8

16 Responses


Hi.

I am investigating how to help move godaddy from rpms to a container-like solution (virtualenvs, lxc, or docker...) and a set of questions that comes up is the following (and I would think that some folks on this mailing list may have some useful insight into the answers):

I’ve been mulling this over for a while as well, and although we’re not yet there I figured I might as well chip in with my 2p all the same.

  • Have you done the transition?

Not yet!

  • Was/is kolla used or looked into? or something custom?

We’re looking at deploying Docker containers from images that have been created using Puppet. We’d also use Puppet to manage the orchestration, i.e. to make sure a given container is running in the right place and using the correct image ID. Containers would comprise discrete OpenStack service ‘composables’, i.e. a container on a control node running the core nova services (nova-api, nova-scheduler, nova-compute, and so on), one running neutron-server, one for keystone, etc. Nothing unusual there.

The workflow would be something like:

  1. Developer generates / updates configuration via Puppet and builds a new image;
  2. Image is uploaded into a private Docker image registry. Puppet handles deploying a container from this new image ID;
  3. New container is deployed into a staging environment for testing;
  4. Assuming everything checks out, Puppet again handles deploying an updated container into the production environment on the relevant hosts.

I’m simplifying things a little but essentially that’s how I see this hanging together.
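Steps 1 and 2 of that workflow might be sketched as follows, driving the Docker CLI from a small build script. The registry host, image name, and tag below are hypothetical; in a real setup the values (and the build context itself) would come from the Puppet-managed pipeline:

```python
import subprocess

REGISTRY = "registry.example.com:5000"  # hypothetical private registry


def image_ref(registry: str, name: str, tag: str) -> str:
    """Fully qualified reference for an image in a private registry."""
    return f"{registry}/{name}:{tag}"


def build_and_push(context_dir: str, name: str, tag: str) -> str:
    """Build an image from a Puppet-generated build context, push it,
    and return the content-addressable ID that deploys can be pinned to."""
    ref = image_ref(REGISTRY, name, tag)
    subprocess.run(["docker", "build", "-t", ref, context_dir], check=True)
    subprocess.run(["docker", "push", ref], check=True)
    # Record the immutable image ID so orchestration can verify that the
    # right image is running on each host.
    out = subprocess.run(
        ["docker", "inspect", "--format", "{{.Id}}", ref],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()
```

Pinning deployments to the returned image ID (rather than a mutable tag) is what lets Puppet assert "this container runs this exact image" on each host.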

  • What was the roll-out strategy to achieve the final container solution?

We’d do this piecemeal, and so containerise some of the ‘safer’ components first of all (such as Horizon) to make sure this all hangs together. Eventually we’d have all of our core OpenStack services on the control nodes isolated and running in containers, and then work on this approach for the rest of the platform.

Would love to hear from other operators as well as to their experience and conclusions.

-Nick
--
DataCentred Limited registered in England and Wales no. 05611763


responded May 12, 2016 by Nick_Jones (840 points)   1

On 5/12/16, 2:04 PM, "Joshua Harlow" harlowja@fastmail.com wrote:

Hi there all-ye-operators,

I am investigating how to help move godaddy from rpms to a
container-like solution (virtualenvs, lxc, or docker...) and a set of
questions that comes up is the following (and I would think that some
folks on this mailing list may have some useful insight into the answers):

  • Have you done the transition?

  • How did the transition go?

  • Was/is kolla used or looked into? or something custom?

  • How long did it take to do the transition from a package based
    solution (with say puppet/chef being used to deploy these packages)?

    • Follow-up being how big was the team to do this?

I know I am not an operator, but to respond on this particular point
related to the Kolla question above: I think the team size could be very
small and still effective. You would want 24-hour coverage of your data
center, plus a backup individual, which puts the IC list at 4 people (3
8-hour shifts + 1 backup in case of illness, etc.). Expect these folks to
need other work as well, since once Kolla is deployed there isn't a whole
lot to do. A 64-node cluster is deployable by one individual in 1-2 hours
once the gear has been racked. Realistically, if you plan to deploy Kolla
I'd expect that individual to want to train for 3-6 weeks, deploying over
and over to get a feel for the Kolla workflow. Try it, I suspect you will
like it :)

If you had less rigorous constraints around availability than I'd expect
GoDaddy to have, a Kolla deployment could likely be managed with as little
as half a person or less. Everything, including upgrades, is automated.

Regards
-steve

  • What was the roll-out strategy to achieve the final container solution?

Any other feedback (and/or questions that I missed)?

Thanks,

Josh


responded May 13, 2016 by Steven_Dake_(stdake) (24,540 points)   2 13 26

On Thu, May 12, 2016 at 5:04 PM, Joshua Harlow harlowja@fastmail.com
wrote:

Hi there all-ye-operators,

I am investigating how to help move godaddy from rpms to a container-like
solution (virtualenvs, lxc, or docker...) and a set of questions that comes
up is the following (and I would think that some folks on this mailing list
may have some useful insight into the answers):

  • Have you done the transition?

We've done the transition to containers using both RPMs and source code.
We started out by just putting the RPMs into the container, then moved
to building the containers from source. There has since been a bit of a
change in direction requiring us to go back to RPMs, which was a simple
switch to make.

The biggest thing we had to think about was the configuration files. We
wanted them to be as easy and clean as possible, and we didn't want to
keep creating tons of container images for all the different
environments. In the end we realized we could use etcd to drive
configuration changes through environment variables, which made them
very easy.
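That etcd-driven idea can be sketched as an entrypoint that renders the service's config file from environment variables; in a real deployment something like confd or a wrapper script would populate those variables from etcd before the service starts. The config options and `NOVA_*` variable names below are made up for illustration:

```python
import os
from string import Template

# Hypothetical fragment; a real nova.conf has many more options.
CONF_TEMPLATE = Template("""\
[DEFAULT]
transport_url = $transport_url

[database]
connection = $db_connection
""")


def render_config(environ=os.environ) -> str:
    """Render the service config from environment variables, so a single
    container image can be reused unchanged across dev/staging/prod."""
    return CONF_TEMPLATE.substitute(
        transport_url=environ.get("NOVA_TRANSPORT_URL", "rabbit://localhost"),
        db_connection=environ.get("NOVA_DB_CONNECTION", "sqlite:///nova.db"),
    )
```

Because the image no longer embeds environment-specific values, one image can serve every environment, which is exactly the "don't build tons of images" goal described above.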

  • How did the transition go?

It was very easy for us to move from RPMs on the host to containers. We
started off with one project, worked through that, and proceeded on to
the next. We were easily able to mix and match between RPMs on the host
and new containers. Our automation proved very useful in making things
easier (obviously).

  • Was/is kolla used or looked into? or something custom?

We started down this process way before kolla was out there and running, so
it would take a lot for us to move over to kolla as we have a pretty
detailed deployment setup.

  • How long did it take to do the transition from a package based solution
    (with say puppet/chef being used to deploy these packages)?

It took a week or two honestly. It is a lot easier than you think. Just
take your current configuration file, and put it inside the container and
run it and see what happens. That was the easiest way to get started and
see how they act within your environment.

  • Follow-up being how big was the team to do this?

    • What was the roll-out strategy to achieve the final container solution?

We use Ansible along with docker-compose to do all our deployments. We
use it to tell HAProxy to take the service out of rotation, wait for it
to drain, take the container down, load the new container, start it up,
run a few test cases to ensure the container is doing what it should,
and then put it back into rotation via HAProxy.
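A minimal sketch of that drain-and-replace loop, assuming an HAProxy admin socket and docker-compose on the host. The socket path, backend, server, and service names are hypothetical; the real deployment would drive these steps from Ansible and would run smoke tests before re-enabling the server:

```python
import socket
import subprocess
import time

HAPROXY_SOCK = "/var/run/haproxy.sock"  # hypothetical admin-socket path


def server_ref(backend: str, server: str) -> str:
    """HAProxy runtime-API name for one server within a backend."""
    return f"{backend}/{server}"


def haproxy_cmd(cmd: str, sock_path: str = HAPROXY_SOCK) -> str:
    """Send one command to the HAProxy admin socket and return its reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(cmd.encode() + b"\n")
        return s.recv(65536).decode()

def rolling_update(backend: str, server: str, service: str,
                   drain_secs: int = 30) -> None:
    """One node's turn in the rotation: drain, redeploy, re-enable."""
    ref = server_ref(backend, server)
    haproxy_cmd(f"disable server {ref}")   # take out of rotation
    time.sleep(drain_secs)                 # let in-flight connections drain
    subprocess.run(["docker-compose", "up", "-d", service], check=True)
    # A real run would execute API smoke tests against the service here.
    haproxy_cmd(f"enable server {ref}")    # back into rotation
```

Repeating this host by host gives a zero-downtime rollout, since the load balancer always has the remaining nodes in rotation.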

Any other feedback (and/or questions that I missed)?

One thing we realized is that you have to use host-based networking. Do
not try to run the containers using Docker's built-in networking; you
will get some weird results. We solved all the weirdness when we moved
everything over to host-based networking.

We are beginning to work on doing compute nodes and gateway nodes. Since
those don't change as often as controller functions do, we gained a lot of
efficiency and speed for deployments by moving to containers.

We have started to look at deploying via Kubernetes. We have had it
working in our lab for a while now, but we are still trying to get
familiar with it before we start trying to use it in production.

Thanks,

Josh


responded May 13, 2016 by Joseph_Bajin (2,140 points)   4

Hi Josh,

sorry for the double reply, I seem to be having ML bounce issues.

comments in-line,

On Thu, May 12, 2016 at 4:04 PM, Joshua Harlow harlowja@fastmail.com wrote:

Hi there all-ye-operators,

I am investigating how to help move godaddy from rpms to a container-like solution (virtualenvs, lxc, or docker...) and a set of questions that comes up is the following (and I would think that some folks on this mailing list may have some useful insight into the answers):

  • Have you done the transition?
    Back in the Havana timeframe I attempted a package-to-source
    conversion and it was fugly. I was attempting to convert nodes
    in-place and found that the distro packages were applying "value
    added" out-of-tree patches, and those patches caused me no end of pain.

  • How did the transition go?
    As mentioned, the transition was painful. While cleaning up packages
    leaves all sorts of crufty bits on a host, the biggest problem I ran
    into during that time period was related to the DB: some of the
    distro packages pulled in patches that added DB migrations, and those
    migrations made moving to the OpenStack source hard. While the
    conversion was a great learning experience, in the end I abandoned
    the effort.

  • Was/is kolla used or looked into? or something custom?
    Full disclosure, I work for Rackspace on the OpenStack-Ansible
    project. The OSA project uses both containers (LXC) and Python
    virtual environments. We do both because we want the user, file
    system, and process isolation that containers give us, and we want
    to isolate OpenStack from the operating system's own Python
    dependencies. The idea is to let the host operating system do what
    it does best and keep OpenStack as far away from it as possible.
    This has some great advantages, which we've seen in our ability to
    scale services independently of a given host while keeping them
    fully isolated. In the end our solution is a hybrid one, as we run
    on metal for Cinder-Volume (when using the reference LVM driver),
    Nova-Compute (when using Linux+KVM), and Swift-.* (except proxies).

  • How long did it take to do the transition from a package based solution (with say puppet/chef being used to deploy these packages)?
    I can't remember specifically, but I think I worked on the effort for
    ~2 weeks before I decided it wasn't worth continuing. If I were to
    try it all again I'd likely have a better time today than I did then,
    but I still think it'd be a mess. My basic plan of attack today would
    be to add nodes to the environment using the new infrastructure and
    slowly decommission the old deployment. You'll likely need to
    identify the point in time your distro packages currently correspond
    to and take stock of all of the patches they may have applied. Then,
    with all of that fully understood, you will need to deploy a version
    of OpenStack just ahead of your release, which "should" pull the
    environment in line with the community.

    • Follow-up being how big was the team to do this?
      It was just me; I was working on a 15+ node lab.
  • What was the roll-out strategy to achieve the final container solution?
    My team at Rackspace created the initial release of what is now known
    as the OpenStack-Ansible project. In our evaluation of container
    technologies we found Docker to be a cute tool that was incapable of
    doing what we needed while remaining stable/functional, so we went
    with LXC. Having run LXC for a few years now, 2 of which have been
    with production workloads, I've been very happy with the results.
    This is our basic reference architecture: [
    http://docs.openstack.org/developer/openstack-ansible/install-guide/overview-hostlayout.html
    ]. Regardless of your choice in container technologies, you're going
    to have to deal with some limitations. Before going full "container
    all the things" I'd look into what your current deployment needs are
    and see if there are known limitations that are going to cause you
    major headaches (like AF_NETLINK not being namespace-aware, making it
    impossible to mount an iSCSI target within a container,
    https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855, unless
    you drop the network namespace).

Any other feedback (and/or questions that I missed)?
* Dependency problems are still problems in every container
technology. Having a reliable build environment or a package mirror is
still a must.
* Docker files are not packages no matter how many docker people
tell you they are. However, a docker file is a really nice way to
express a container runtime and if you treat it like a runtime
expression engine and stay within those lines it works rather well.
* OverlayFS has had some problems with various under-mounts
(example: https://bugzilla.redhat.com/show_bug.cgi?id=1319507). We're
using LVM + EXT4 for the container root devices, which I've not seen
reliability issues with.
* BTRFS may not be a good solution either (see the gotchas for more
on that https://btrfs.wiki.kernel.org/index.php/Gotchas)
* ZFS on linux looks really promising and has a long history of
being awesome but I'm reserving full judgment on that until I can have
a good long look at it.
* If you're using containers and dropping all of the namespaces to
work around problems or to make your life easier just run a chroot and
save yourself a lot of frustration.
* Generally a containerized solution will require you to re-imagine
your infrastructure. I know containers are all the hype and everyone
says it's so simple but there's nothing simple about running services
in production that people rely on especially when you consider the
network topology.

If your deployment and operator teams are already accustomed to a Chef
or Puppet ecosystem (I'm assuming you're a Chef or Puppet shop based on
one of the questions), I'd personally recommend popping into those
communities' IRC channels (if you're not already) to see what other
folks are doing. It's likely others are working on similar things, and
you may be able to get what you need while helping out other deployers,
all without reinventing many wheels.

That's about all I can think of right now, and I hope it helps. Best of
luck to you, and if you have other questions or just want to chat, drop
by the #openstack-ansible channel (even if you don't use
OpenStack-Ansible). I'm always happy to help, and there are lots of
other deployers running lots of different things who may also be able
to give you insight or perspective.

Thanks,

Josh



--

Kevin Carter
IRC: Cloudnull


responded May 13, 2016 by Carter,_Kevin (580 points)   1

On 05/12/2016 04:04 PM, Joshua Harlow wrote:
Hi there all-ye-operators,

I am investigating how to help move godaddy from rpms to a
container-like solution (virtualenvs, lxc, or docker...) and a set of
questions that comes up is the following (and I would think that some
folks on this mailing list may have some useful insight into the answers):

  • Have you done the transition?

We've been using openstack-ansible since it existed, it's working well
for us.

  • How did the transition go?

It can be painful, but it's worked out in the long run.

  • Was/is kolla used or looked into? or something custom?

OpenStack-Ansible, which is in the OpenStack big tent. It used to be
os-ansible-deployment in StackForge, but we've removed the Rackspace-isms.
I will say that OpenStack-Ansible is one of the few that have been
doing upgrades reliably for a while, since at least Icehouse, maybe
further back.

  • How long did it take to do the transition from a package based
    solution (with say puppet/chef being used to deploy these packages)?

    • Follow-up being how big was the team to do this?

Our team was somewhat bigger than most, as we have many deployments and
we had to do it from scratch. You CAN do it solo, but I'd recommend
you have coverage / on-call for whatever your requirements are.

  • What was the roll-out strategy to achieve the final container solution?

For OpenStack-Ansible I'd recommend deploying a service at a time,
migrating piecemeal. You can migrate to the same release as you are on
(I hope), though I'd recommend Kilo or greater, as upgrades can get
annoying after a while.

Any other feedback (and/or questions that I missed)?

Thanks,

Josh



--
-- Matthew Thode (prometheanfire)


responded May 13, 2016 by prometheanfire_at_ge (6,880 points)   1 4 5

I'm working with a customer to define and manage a transition to what is
currently anticipated to be a container-based solution for OpenStack
services. The focus on containers is to simplify the middleware deployment
of both OpenStack services and other services that are deployed to enable
the overall provider's cloud environment.

On Thu, May 12, 2016 at 11:04 AM, Joshua Harlow harlowja@fastmail.com
wrote:

Hi there all-ye-operators,

I am investigating how to help move godaddy from rpms to a container-like
solution (virtualenvs, lxc, or docker...) and a set of questions that comes
up is the following (and I would think that some folks on this mailing list
may have some useful insight into the answers):

  • Have you done the transition?

Not yet. Still investigating and modeling the transition and deployment of
non-openstack services.

  • How did the transition go?

We hope for it to be very smooth :)

  • Was/is kolla used or looked into? or something custom?

Our principal target is Kolla, along with the Kolla/OSAD model for other
services being deployed alongside the rest of OpenStack. I expect Kolla to
be our final solution, as it also appears to map well into a CI approach we
want to leverage for all future deployments.

  • How long did it take to do the transition from a package based solution
    (with say puppet/chef being used to deploy these packages)?

The expectation is that the actual transition will be automated, but that
we'll be rolling this solution out in ~6 months time.

  • Follow-up being how big was the team to do this?

Leveraging the Kolla team? -> Huge :). We are currently planning on a 3
person team working on this, though not all full time, and we're also
looking at the CI services and mapping other services into containers.

  • What was the roll-out strategy to achieve the final container solution?

TBD. But we'll be doing greenfield first, and working back into the
brownfield active-system transition over time. We do have NFS-backed
instance storage, so migration of VMs is possible if it becomes necessary
to migrate the live system. The control system is also HA-capable, so it
should be possible to migrate services into containers and keep the
system online. We'll see how that all maps out in the lab first, though.

Any other feedback (and/or questions that I missed)?

I do think that now is the time to do this transition, and am looking
forward to supporting this journey!

Thanks,

Josh


responded May 13, 2016 by Robert_Starmer (1,780 points)   1

Steven Dake (stdake) wrote:

On 5/12/16, 2:04 PM, "Joshua Harlow" harlowja@fastmail.com wrote:

Hi there all-ye-operators,

I am investigating how to help move godaddy from rpms to a
container-like solution (virtualenvs, lxc, or docker...) and a set of
questions that comes up is the following (and I would think that some
folks on this mailing list may have some useful insight into the answers):

  • Have you done the transition?

  • How did the transition go?

  • Was/is kolla used or looked into? or something custom?

  • How long did it take to do the transition from a package based
    solution (with say puppet/chef being used to deploy these packages)?

    • Follow-up being how big was the team to do this?

I know I am not an operator, but to respond on this particular point
related to the Kolla question above: I think the team size could be very
small and still effective. You would want 24-hour coverage of your data
center, plus a backup individual, which puts the IC list at 4 people (3
8-hour shifts + 1 backup in case of illness, etc.). Expect these folks to
need other work as well, since once Kolla is deployed there isn't a whole
lot to do. A 64-node cluster is deployable by one individual in 1-2 hours
once the gear has been racked. Realistically, if you plan to deploy Kolla
I'd expect that individual to want to train for 3-6 weeks, deploying over
and over to get a feel for the Kolla workflow. Try it, I suspect you will
like it :)

Thanks for the info and/or estimates, but before I dive too far in I have
a question. I see that the following has links to how the different
services run under Kolla:

http://docs.openstack.org/developer/kolla/#kolla-services

But one that seems missing from this list is what I would expect to be
the more complicated one, that being nova-compute (and libvirt and KVM).
Are there any secret docs on that (since I would assume it'd be the most
problematic to get right)?

If you had less rigorous constraints around availability than I'd expect
GoDaddy to have, a Kolla deployment could likely be managed with as little
as half a person or less. Everything, including upgrades, is automated.

Along this line, do people typically plug the following into a local
Jenkins system?

http://docs.openstack.org/developer/kolla/quickstart.html#building-container-images

Are there any docs anywhere on how people typically incorporate Jenkins
into the Kolla workflow (I assume they do)?

Regards
-steve

  • What was the roll-out strategy to achieve the final container solution?

Any other feedback (and/or questions that I missed)?

Thanks,

Josh


responded May 13, 2016 by harlowja_at_fastmail (16,200 points)   2 7 8

Matthew Thode wrote:
On 05/12/2016 04:04 PM, Joshua Harlow wrote:

Hi there all-ye-operators,

I am investigating how to help move godaddy from rpms to a
container-like solution (virtualenvs, lxc, or docker...) and a set of
questions that comes up is the following (and I would think that some
folks on this mailing list may have some useful insight into the answers):

  • Have you done the transition?

We've been using openstack-ansible since it existed, it's working well
for us.

  • How did the transition go?

It can be painful, but it's worked out in the long run.

  • Was/is kolla used or looked into? or something custom?

OpenStack-Ansible, which is in the OpenStack big tent. It used to be
os-ansible-deployment in StackForge, but we've removed the Rackspace-isms.
I will say that OpenStack-Ansible is one of the few that have been
doing upgrades reliably for a while, since at least Icehouse, maybe
further back.

What's the connection between 'openstack-ansible' and 'kolla'? Is there
any (or any in progress)?

  • How long did it take to do the transition from a package based
    solution (with say puppet/chef being used to deploy these packages)?

    • Follow-up being how big was the team to do this?

Our team was somewhat bigger than most as we have many deployments and
we had to do it from scratch. If you CAN do it solo, but I'd recommend
you have coverage / on call for whatever your requirements are.

  • What was the roll-out strategy to achieve the final container solution?

For Openstack-ansible I'd recommend deploying a service at a time,
migrating piecemeal. You can migrate to the same release as you are on
(I hope), though I'd recommend kilo or greater as upgrades can get
annoying after a while.

Any other feedback (and/or questions that I missed)?

Thanks,

Josh


responded May 13, 2016 by harlowja_at_fastmail (16,200 points)   2 7 8

On 05/13/2016 12:48 PM, Joshua Harlow wrote:

  • Was/is kolla used or looked into? or something custom?

OpenStack-Ansible, which is in the OpenStack big tent. It used to be
os-ansible-deployment in StackForge, but we've removed the Rackspace-isms.
I will say that OpenStack-Ansible is one of the few that have been
doing upgrades reliably for a while, since at least Icehouse, maybe
further back.

What's the connection between 'openstack-ansible' and 'kolla'? Is there
any (or any in progress)?

The main difference is that openstack-ansible uses more heavyweight
containers from a common base (Ubuntu 14.04 currently, 16.04/CentOS
'soon') and builds on top of that, using Python virtualenvs as well.
Kolla, on the other hand, creates the container images centrally and
ships them around.

The other thing to note is that Kolla has not done a non-greenfield
upgrade as far as I know, I know it's on their roadmap though.

--
-- Matthew Thode (prometheanfire)


responded May 13, 2016 by prometheanfire_at_ge (6,880 points)   1 4 5

Matthew Thode wrote:
On 05/13/2016 12:48 PM, Joshua Harlow wrote:

  • Was/is kolla used or looked into? or something custom?

OpenStack-Ansible, which is in the OpenStack big tent. It used to be
os-ansible-deployment in StackForge, but we've removed the Rackspace-isms.
I will say that OpenStack-Ansible is one of the few that have been
doing upgrades reliably for a while, since at least Icehouse, maybe
further back.
What's the connection between 'openstack-ansible' and 'kolla'? Is there
any (or any in progress)?

The main difference is that openstack-ansible uses more heavyweight
containers from a common base (Ubuntu 14.04 currently, 16.04/CentOS
'soon') and builds on top of that, using Python virtualenvs as well.
Kolla, on the other hand, creates the container images centrally and
ships them around.

So I guess it's like the following (correct me if I am wrong):

openstack-ansible


  1. Sets up LXC containers from common base on deployment hosts (ansible
    here to do this)
  2. Installs things into those containers (virtualenvs, packages, git
    repos, other ... more ansible)
  3. Connects all the things together (more more ansible).
  4. Decommissions existing container (if it exists) and replaces with new
    container (more more more ansible).
  5. <>

kolla


  1. Builds up (installing things and such) docker containers outside of
    deployment hosts (say inside jenkins) [not ansible]
  2. Ships built up containers to a docker hub
  3. Ansible then runs commands on deployment hosts to download image from
    docker hub
  4. Connects all the things together (more ansible).
  5. Decommissions existing container (if it exists) and replaces with new
    container (more more ansible).
  6. <>

Yes, the above is highly simplistic, but I'm just trying to get a feel
for the different base steps here ;)
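To make the contrast concrete, the kolla-style deploy step on a host reduces to pulling a centrally built image and running it, with no local build step. A hedged sketch (the digest-pinned image reference and container name are hypothetical):

```python
import subprocess


def run_argv(pinned_ref: str, name: str) -> list:
    """Argument vector to run a prebuilt image with host networking."""
    return ["docker", "run", "-d", "--name", name,
            "--net=host", "--restart=always", pinned_ref]


def deploy_prebuilt(pinned_ref: str, name: str) -> None:
    """Fetch a centrally built image and replace the running container.
    Contrast with openstack-ansible, which assembles the container's
    contents (packages, virtualenvs) in place on each host."""
    subprocess.run(["docker", "pull", pinned_ref], check=True)
    subprocess.run(["docker", "rm", "-f", name])  # ok to fail if absent
    subprocess.run(run_argv(pinned_ref, name), check=True)
```

Pulling by an immutable reference is what makes the kolla flow's step 3 "download image from docker hub" reproducible across hosts.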

The other thing to note is that Kolla has not done a non-greenfield
upgrade as far as I know, I know it's on their roadmap though.


responded May 13, 2016 by harlowja_at_fastmail (16,200 points)   2 7 8
...