
[Openstack] [Openstack-Ansible] Deploy errors on small setup


Hello!

I am trying to set up an OpenStack cluster but am not having much
success. I have been fighting it for several weeks, hence my decision
to join this list.

I have 4 hosts, all running CentOS 7:

infra1   (Celeron J1900, 4 cores @ 2 GHz, 8 GB RAM, 120 GB SSD)
compute1 (Core i7, 4 cores @ 4 GHz, 16 GB RAM, 120 GB SSD)
log1     (AMD Athlon, 2 cores @ 2 GHz, 3 GB RAM, 120 GB HDD)
storage1 (Xeon E3, 4 cores @ 2 GHz, 8 GB RAM, 8 TB RAID10)

Considering the none-too-powerful specs of infra1 and log1, I have set
the following services to run on metal (override sketch below the list):

aodh_container
ceilometer_central_container
cinder_api_container
cinder_scheduler_container
galera_container
glance_container
gnocchi_container
heat_apis_container
heat_engine_container
horizon_container
keystone_container
memcached_container
neutron_agents_container
neutron_server_container
nova_api_metadata_container
nova_api_os_compute_container
nova_api_placement_container
nova_conductor_container
nova_console_container
nova_scheduler_container
rabbit_mq_container
repo_container
rsyslog_container
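
For reference, this is roughly how I applied the on-metal overrides, one
file per service under /etc/openstack_deploy/env.d/ (a sketch from
memory; I am assuming the container_skel/is_metal property is the right
mechanism, so please correct me if it is not):

    # /etc/openstack_deploy/env.d/aodh.yml -- repeated per service above
    container_skel:
      aodh_container:
        properties:
          is_metal: true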

When I run setup-hosts.yml I get errors on infra1. The specific errors
vary from run to run, but it generally fails during container creation.
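
For reference, this is how I kick off the run (stock paths, per the
installation guide):

    # cd /opt/openstack-ansible/playbooks
    # openstack-ansible setup-hosts.yml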

8<---8<---8<---8<---

TASK [lxc_container_create : LXC autodev setup]


Thursday 19 October 2017 17:00:28 -0700 (0:01:36.565) 0:57:32.820

An exception occurred during task execution. To see the full traceback,
use -vvv. The error was: OSError: [Errno 12] Cannot allocate memory
fatal: [infra1_aodh_container-3ef9fdf1]: FAILED! => {"failed": true,
"msg": "Unexpected failure during module execution.", "stdout": ""}
An exception occurred during task execution. To see the full traceback,
use -vvv. The error was: OSError: [Errno 12] Cannot allocate memory
fatal: [infra1_utility_container-7578d165]: FAILED! => {"failed": true,
"msg": "Unexpected failure during module execution.", "stdout": ""}
An exception occurred during task execution. To see the full traceback,
use -vvv. The error was: OSError: [Errno 12] Cannot allocate memory
fatal: [infra1_horizon_container-4056733b]: FAILED! => {"failed": true,
"msg": "Unexpected failure during module execution.", "stdout": ""}
ok: [infra1_nova_scheduler_container-752fb34b -> 172.29.236.11]
ok: [infra1_keystone_container-23bb4cba -> 172.29.236.11]
An exception occurred during task execution. To see the full traceback,
use -vvv. The error was: OSError: [Errno 12] Cannot allocate memory
fatal: [infra1_glance_container-299bd597]: FAILED! => {"failed": true,
"msg": "Unexpected failure during module execution.", "stdout": ""}
changed: [infra1_neutron_agents_container-e319526f -> 172.29.236.11]
changed: [infra1_cinder_scheduler_container-84442b11 -> 172.29.236.11]
changed: [infra1_neutron_server_container-d19ab320 -> 172.29.236.11]
An exception occurred during task execution. To see the full traceback,
use -vvv. The error was: OSError: [Errno 12] Cannot allocate memory
fatal: [infra1_repo_container-9b73f4cd]: FAILED! => {"failed": true,
"msg": "Unexpected failure during module execution.", "stdout": ""}
ok: [infra1_cinder_api_container-05fbf13a -> 172.29.236.11]
ok: [infra1_nova_api_os_compute_container-99a9a1e0 -> 172.29.236.11]
ok: [infra1_nova_api_metadata_container-0a10aa4a -> 172.29.236.11]
ok: [infra1_galera_container-a3be12a1 -> 172.29.236.11]
ok: [infra1_nova_conductor_container-d8c2040f -> 172.29.236.11]
ok: [infra1_nova_console_container-e4a8d3ae -> 172.29.236.11]
ok: [infra1_gnocchi_container-e83732f5 -> 172.29.236.11]
ok: [infra1_rabbit_mq_container-4c8a4541 -> 172.29.236.11]
ok: [infra1_ceilometer_central_container-fe8f973b -> 172.29.236.11]
ok: [infra1_memcached_container-895a7ccf -> 172.29.236.11]
ok: [infra1_nova_api_placement_container-ec10eadb -> 172.29.236.11]
ok: [infra1_heat_apis_container-7579f33e -> 172.29.236.11]
ok: [infra1_heat_engine_container-2a26e880 -> 172.29.236.11]

8<---8<---8<---8<---
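
Given the "Cannot allocate memory" errors, my working theory is that
infra1 simply runs out of RAM once a couple of dozen containers start
up. This is what I have been running on infra1 to watch it (plain
CentOS tools, nothing OSA-specific):

    # free -m                                      # memory during the play
    # dmesg | grep -i -e oom -e 'out of memory'    # OOM-killer activity
    # lxc-ls -f                                    # which containers came up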

In case it is useful, here is my user config:


cidr_networks:
  container: 172.29.236.0/22
  tunnel: 172.29.240.0/22
  storage: 172.29.244.0/22

used_ips:
  - 172.29.236.1
  - "172.29.236.100,172.29.236.200"
  - "172.29.240.100,172.29.240.200"
  - "172.29.244.100,172.29.244.200"

global_overrides:
  internal_lb_vip_address: 172.29.236.9
  #
  # The below domain name must resolve to an IP address
  # in the CIDR specified in haproxy_keepalived_external_vip_cidr.
  # If using different protocols (https/http) for the public/internal
  # endpoints the two addresses must be different.
  #
  external_lb_vip_address: 172.29.236.10
  tunnel_bridge: "br-vxlan"
  management_bridge: "br-mgmt"
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
        is_ssh_address: true
    - network:
        container_bridge: "br-vxlan"
        container_type: "veth"
        container_interface: "eth10"
        ip_from_q: "tunnel"
        type: "vxlan"
        range: "1:1000"
        net_name: "vxlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth12"
        host_bind_override: "br-vlan"
        type: "flat"
        net_name: "flat"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth11"
        type: "vlan"
        range: "1:1"
        net_name: "vlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-storage"
        container_type: "veth"
        container_interface: "eth2"
        ip_from_q: "storage"
        type: "raw"
        group_binds:
          - glance_api
          - cinder_api
          - cinder_volume
          - nova_compute

#
# Infrastructure
#

# galera, memcache, rabbitmq, utility
shared-infra_hosts:
  infra1:
    ip: 172.29.236.11

# repository (apt cache, python packages, etc)
repo-infra_hosts:
  infra1:
    ip: 172.29.236.11

# load balancer
# Ideally the load balancer should not use the Infrastructure hosts.
# Dedicated hardware is best for improved performance and security.
haproxy_hosts:
  infra1:
    ip: 172.29.236.11

# rsyslog server
log_hosts:
  log1:
    ip: 172.29.236.14

#
# OpenStack
#

# keystone
identity_hosts:
  infra1:
    ip: 172.29.236.11

# cinder api services
storage-infra_hosts:
  infra1:
    ip: 172.29.236.11

# glance
# The settings here are repeated for each infra host.
# They could instead be applied as global settings in
# user_variables, but are left here to illustrate that
# each container could have different storage targets.
image_hosts:
  infra1:
    ip: 172.29.236.11
    container_vars:
      limit_container_types: glance
      glance_nfs_client:
        - server: "172.29.244.15"
          remote_path: "/images"
          local_path: "/var/lib/glance/images"
          type: "nfs"
          options: "_netdev,auto"

# nova api, conductor, etc services
compute-infra_hosts:
  infra1:
    ip: 172.29.236.11

# heat
orchestration_hosts:
  infra1:
    ip: 172.29.236.11

# horizon
dashboard_hosts:
  infra1:
    ip: 172.29.236.11

# neutron server, agents (L3, etc)
network_hosts:
  infra1:
    ip: 172.29.236.11

# ceilometer (telemetry data collection)
metering-infra_hosts:
  infra1:
    ip: 172.29.236.11

# aodh (telemetry alarm service)
metering-alarm_hosts:
  infra1:
    ip: 172.29.236.11

# gnocchi (telemetry metrics storage)
metrics_hosts:
  infra1:
    ip: 172.29.236.11

# nova hypervisors
compute_hosts:
  compute1:
    ip: 172.29.236.12

# ceilometer compute agent (telemetry data collection)
metering-compute_hosts:
  compute1:
    ip: 172.29.236.12
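
In case part of the problem is just how many containers land on infra1,
this is how I have been listing what the dynamic inventory generated
(assuming the stock inventory-manage.py script that ships with OSA; I
may have the flag wrong):

    # cd /opt/openstack-ansible
    # ./scripts/inventory-manage.py -l    # one row per host/container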


Thanks for any ideas!

FV

