Minimal OpenStack-Ansible deployment
Deploying OpenStack-Ansible, and following the deployer documentation, will result in a production-ready OpenStack cloud. A typical deployment requires operators to set up various network interfaces, use different VLAN tags, and create several purpose-built bridges. But what if these typical requirements can't be met, or what if the goal is simply to set up an OpenStack cloud using OpenStack-Ansible on a single network device, with a single bridge, in a much-simplified environment? This post will answer those questions and provide examples of how to set up a simplified deployment without breaking the ability to upgrade, extend, or reconfigure the cloud later in its lifecycle.
The key to a simplified deployment is the /etc/openstack_deploy/openstack_user_config.yml file. Everything within OSA starts there. It's within this file that we describe the deployment, and it's the flexibility OSA provides in that configuration that we'll use to create the simplified deployment.
Configuring the Host(s)
The following information augments the basic host setup of a typical deployment. From a network perspective, we'll have ONE network device. For the purpose of this post, my ONE network interface will be named eth0, which will be plugged into ONE bridge named br-mgmt.
Here's the network configuration.
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br-mgmt
iface br-mgmt inet static
    ### flat veth
    pre-up ip link add flat-veth1 type veth peer name flat-veth2 || true  # Create veth pair
    pre-up ip link set flat-veth1 up                                      # Set the veth UP
    pre-up ip link set flat-veth2 up                                      # Set the veth UP
    post-down ip link del flat-veth1 || true                              # Delete veth pair on DOWN
    ### flat veth
    bridge_stp off
    bridge_waitport 10
    bridge_fd 0
    bridge_ports eth0 flat-veth1
    offload-sg off
    address 172.16.24.53/22
    gateway 172.16.24.2
As you can see in the network interface configuration file, I've got eth0 plugged into br-mgmt, with a veth pair hanging off of it so that I can use flat networking should I ever need or want it.
Flat networking for instances is enabled by plugging one end of a veth pair into the bridge and leaving the other end alone. This results in an interface, flat-veth2, which can be used for instance traffic on a flat network.
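If you want to sanity-check the bridge and veth pair once the interfaces are up, something like the following does the trick (a quick verification sketch using standard iproute2 and bridge-utils commands; the names match the configuration above):
# Confirm the bridge exists and has eth0 and flat-veth1 attached
brctl show br-mgmt

# Confirm both ends of the veth pair exist and are UP
ip link show flat-veth1
ip link show flat-veth2

# Confirm the host address landed on the bridge
ip addr show br-mgmt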
With the network interface in place, make any other adjustments needed before moving on to configuring OSA.
Refer to the following article on how the host storage has been set up for container provisioning.
OpenStack-Ansible configuration
To make the simplified deployment process possible, the /etc/openstack_deploy/openstack_user_config.yml file will need to be edited so only ONE bridge is used for all of the host machines and containers. The following information assumes OpenStack-Ansible has been minimally installed.
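For reference, a minimal install usually looks something like the sketch below; the clone URL and bootstrap script are the upstream defaults, but check the deployment guide for your target release and branch:
# Clone OpenStack-Ansible and bootstrap Ansible on the deployment host
git clone https://opendev.org/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
# Check out the release branch you intend to deploy, e.g. git checkout stable/<release>
scripts/bootstrap-ansible.sh

# Seed the deployment configuration directory
cp -r /opt/openstack-ansible/etc/openstack_deploy /etc/openstack_deploy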
Using an editor, modify the /etc/openstack_deploy/openstack_user_config.yml file, tailoring the sections to meet the needs of the deployment. For the minimal deployment example, I'll go over each part of the user configuration file that has been modified to make the deployment simpler.
Networking
The first thing to do is minimize the cidr_networks. In the simplified deployment there's only ONE bridge, which is also the container bridge, so the user configuration file will have only the single network CIDR. The used_ips section is still required for any address on the chosen CIDR that should be reserved or is otherwise not available. The global_overrides section is used to define the bridge(s) used for tunnel type networks, management access to containers, and the provider networks that will be set up within neutron and the containers for instance connectivity. Because there's only ONE bridge, the provider_networks section is relatively simple: everything is bound to the br-mgmt bridge, and the network type is defined as the default provider network. Two examples have been provided, one for VLAN and one for FLAT network types.
VLAN Provider Networks
---
cidr_networks:
  container: "172.16.26.0/24"

used_ips:
  - "172.16.26.1,172.16.26.2"

global_overrides:
  internal_lb_vip_address: "172.16.26.1"
  external_lb_vip_address: "172.16.26.2"
  tunnel_bridge: "br-mgmt"
  management_bridge: "br-mgmt"
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "vlan"
        net_name: "vlan"
        range: "10:10"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
        is_ssh_address: true
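Once the cloud is up, the VLAN provider network defined above can be consumed from neutron. The sketch below assumes the net_name of "vlan" is exposed as the physical network name and uses segment 10 from the configured range; the network and subnet names and the subnet CIDR are just examples:
# Create a VLAN provider network on physical network "vlan", segment 10
openstack network create --share --external \
  --provider-network-type vlan \
  --provider-physical-network vlan \
  --provider-segment 10 \
  vlan-net

# Add a subnet so instances can be booted on it
openstack subnet create --network vlan-net \
  --subnet-range 192.168.10.0/24 \
  vlan-subnet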
FLAT Provider Networks
---
cidr_networks:
  container: "172.16.26.0/24"

used_ips:
  - "172.16.26.1,172.16.26.2"

global_overrides:
  internal_lb_vip_address: "172.16.26.1"
  external_lb_vip_address: "172.16.26.2"
  tunnel_bridge: "br-mgmt"
  management_bridge: "br-mgmt"
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        host_bind_override: "flat-veth2"
        type: "flat"
        net_name: "flat"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
        is_ssh_address: true
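As with the VLAN example, a flat neutron network can be created once the deployment finishes. Again, this is only a sketch; it assumes the net_name of "flat" is the physical network name, and the subnet range should match whatever addressing the flat segment actually carries:
# Create a flat provider network on physical network "flat"
openstack network create --share --external \
  --provider-network-type flat \
  --provider-physical-network flat \
  flat-net

# Example subnet; use the real addressing of the flat segment
openstack subnet create --network flat-net \
  --subnet-range 192.168.20.0/24 \
  flat-subnet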
Swift Deployment
If deploying swift in a simplified deployment, two changes will need to be made: the replication_network and storage_network options. Typically these are set to dedicated storage and replication networks to isolate the traffic; however, because the simplified deployment has only one bridge, these settings need to point at that bridge instead. The following example uses the basic swift setup from the OSA example configuration with the two aforementioned changes.
global_overrides:
  swift:
    part_power: 8
    storage_network: 'br-mgmt'
    replication_network: 'br-mgmt'
    drives:
      - name: disk1
      - name: disk2
      - name: disk3
    mount_point: /srv
    storage_policies:
      - policy:
          name: default
          index: 0
          default: True
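The drives listed above are expected to be mounted under the configured mount_point, e.g. /srv/disk1. In a lab, loopback-backed files are a common stand-in for real disks; the sketch below assumes that approach and is not part of the OSA example configuration:
# Create and mount three loopback-backed XFS "disks" (lab use only)
for disk in disk1 disk2 disk3; do
  truncate -s 10G /opt/${disk}.img
  mkfs.xfs -f /opt/${disk}.img
  mkdir -p /srv/${disk}
  mount -o loop,noatime /opt/${disk}.img /srv/${disk}
done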
With the initial configuration resolved for the single network interface and single bridge, refer back to the deployment guide reference for the initial configuration of the OpenStack-Ansible user configuration file.
For a complete example of a simplified deployment (functional and running in a lab) see the following gist.
Running The Deployment
With everything set up within the user configuration files, simply run the deployment normally. Everything done within these configuration files is fully supported and will not break the ability to upgrade later.
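A typical run uses the standard OpenStack-Ansible playbooks in order; the playbook directory below is the usual location, though paths can vary slightly between releases:
# Run the standard OpenStack-Ansible playbooks
cd /opt/openstack-ansible/playbooks
openstack-ansible setup-hosts.yml
openstack-ansible setup-infrastructure.yml
openstack-ansible setup-openstack.yml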
As the deployment begins building and running containers, you'll see only two network interfaces within the containers: one tied to lxcbr0 (the unmanaged, LXC-only network) and one tied to br-mgmt.
# lxc-ls -f
NAME STATE AUTOSTART GROUPS IPV4
compute1_aodh_container-f08d2204 RUNNING 1 onboot, openstack 10.0.3.118, 172.16.26.75
compute1_ceilometer_central_container-56318e0e RUNNING 1 onboot, openstack 10.0.3.134, 172.16.26.190
compute1_cinder_api_container-86552654 RUNNING 1 onboot, openstack 10.0.3.34, 172.16.26.86
compute1_cinder_scheduler_container-2c0c6061 RUNNING 1 onboot, openstack 10.0.3.66, 172.16.26.153
compute1_galera_container-53877d98 RUNNING 1 onboot, openstack 10.0.3.137, 172.16.26.208
compute1_glance_container-78b73e1a RUNNING 1 onboot, openstack 10.0.3.199, 172.16.26.241
compute1_gnocchi_container-9a4b182b RUNNING 1 onboot, openstack 10.0.3.225, 172.16.26.219
compute1_heat_apis_container-c973ef5a RUNNING 1 onboot, openstack 10.0.3.253, 172.16.26.236
compute1_heat_engine_container-ae51062c RUNNING 1 onboot, openstack 10.0.3.8, 172.16.26.132
compute1_horizon_container-4148c753 RUNNING 1 onboot, openstack 10.0.3.194, 172.16.26.49
compute1_keystone_container-7a0a3834 RUNNING 1 onboot, openstack 10.0.3.133, 172.16.26.146
compute1_memcached_container-782a6588 RUNNING 1 onboot, openstack 10.0.3.18, 172.16.26.126
compute1_neutron_agents_container-de8a4d37 RUNNING 1 onboot, openstack 10.0.3.150, 172.16.26.220
compute1_neutron_server_container-219f00f7 RUNNING 1 onboot, openstack 10.0.3.87, 172.16.26.57
compute1_nova_api_metadata_container-9a8fe9ae RUNNING 1 onboot, openstack 10.0.3.101, 172.16.26.170
compute1_nova_api_os_compute_container-2a4faa2c RUNNING 1 onboot, openstack 10.0.3.49, 172.16.26.116
compute1_nova_api_placement_container-42904e4c RUNNING 1 onboot, openstack 10.0.3.158, 172.16.26.115
compute1_nova_conductor_container-5109b386 RUNNING 1 onboot, openstack 10.0.3.61, 172.16.26.101
compute1_nova_console_container-cf223830 RUNNING 1 onboot, openstack 10.0.3.139, 172.16.26.123
compute1_nova_scheduler_container-832bf438 RUNNING 1 onboot, openstack 10.0.3.92, 172.16.26.199
compute1_rabbit_mq_container-652f0bda RUNNING 1 onboot, openstack 10.0.3.173, 172.16.26.54
compute1_repo_container-754d214c RUNNING 1 onboot, openstack 10.0.3.185, 172.16.26.80
compute1_swift_proxy_container-fb47a052 RUNNING 1 onboot, openstack 10.0.3.41, 172.16.26.247
compute1_utility_container-cbd7b73e RUNNING 1 onboot, openstack 10.0.3.97, 172.16.26.94
We can also see that everything from a bridging perspective is greatly simplified.
# brctl show
bridge name bridge id STP enabled interfaces
br-mgmt 8000.0cc47aab5e70 no 219f00f7_eth1
2a4faa2c_eth1
2c0c6061_eth1
4148c753_eth1
42904e4c_eth1
5109b386_eth1
53877d98_eth1
56318e0e_eth1
652f0bda_eth1
754d214c_eth1
782a6588_eth1
78b73e1a_eth1
7a0a3834_eth1
832bf438_eth1
86552654_eth1
9a4b182b_eth1
9a8fe9ae_eth1
ae51062c_eth1
eth0
c973ef5a_eth1
cbd7b73e_eth1
cf223830_eth1
de8a4d37_eth1
de8a4d37_eth11
f08d2204_eth1
fb47a052_eth1
flat-veth1
lxcbr0 8000.fe12b24b4ff9 no 219f00f7_eth0
2a4faa2c_eth0
2c0c6061_eth0
4148c753_eth0
42904e4c_eth0
5109b386_eth0
53877d98_eth0
56318e0e_eth0
652f0bda_eth0
754d214c_eth0
782a6588_eth0
78b73e1a_eth0
7a0a3834_eth0
832bf438_eth0
86552654_eth0
9a4b182b_eth0
9a8fe9ae_eth0
ae51062c_eth0
c973ef5a_eth0
cbd7b73e_eth0
cf223830_eth0
de8a4d37_eth0
f08d2204_eth0
fb47a052_eth0
Wrap-Up
This post covered some basic setup when running a simplified deployment. I hope this helps answer questions on how something like this might get done and how easy it is to bend OSA to fit the constraints of an environment, all without breaking supportability.