Configuring VXLAN in OpenStack Neutron



Networking deployment options available in the OpenStack cloud platform

In my previous article I elucidated VXLAN along with its pros and cons. In this post I concentrate on how to configure VXLAN in an OpenStack cloud environment. Before getting into the configuration, let us look at the network topologies provided by OpenStack Neutron. OpenStack provides a rich networking environment in which we can deploy various networking options such as Flat, VLAN, GRE, and VXLAN. Most documentation deliberately focuses on deploying the VLAN and GRE networking models; here I cover the deployment of VXLAN in OpenStack Neutron.

A Flat network is an extremely simple topology. It does not support isolation of tenants in the network.

In a VLAN network, each tenant is isolated by its own VLAN ID. It is more complex to set up than the flat model.

VXLAN allows you to create a logical network for your virtual machines across different networks. More technically speaking, you can create a layer 2 network on top of layer 3; VXLAN does this through encapsulation. VXLAN also offers advantages over the other network deployment models in terms of both isolation and performance.
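To make the encapsulation cost concrete, the fixed per-packet overhead of VXLAN over an IPv4 underlay can be worked out directly from the header sizes defined in RFC 7348 (the 1500-byte underlay MTU below is an assumption for standard Ethernet):

```python
# VXLAN encapsulation overhead for an IPv4 underlay (header sizes per RFC 7348):
# outer Ethernet (14) + outer IPv4 (20) + outer UDP (8) + VXLAN header (8)
OUTER_ETHERNET = 14
OUTER_IPV4 = 20
OUTER_UDP = 8
VXLAN_HEADER = 8

overhead = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER

# With a standard 1500-byte underlay MTU, the tenant-facing MTU must
# shrink by the overhead to avoid fragmenting every full-size frame.
underlay_mtu = 1500
tenant_mtu = underlay_mtu - overhead

print(overhead)    # 50
print(tenant_mtu)  # 1450
```

This is why guides commonly recommend a tenant network MTU of 1450 when VXLAN runs over a standard 1500-byte network.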

Steps to configure VXLAN on the network and compute nodes

VXLAN can also allow network engineers to migrate virtual machines across long distances, and it plays an important role in software-defined networking (SDN), an emerging architecture in which a central controller (here, the Neutron server) tells network switches where to send packets. In a conventional network, each switch runs proprietary software that tells it what to do. In a software-defined network, packet-forwarding decisions are centralized, and traffic flow can be programmed independently of individual switches and data-center gear. To implement SDN using VXLAN, administrators can use existing hardware and software, which makes the technology financially attractive.

Fig 2. VXLAN tunnel path in OpenStack Neutron

At Network Node:

Step 1: Authenticate the user with the OpenStack Keystone service.

Step 2: From the command line, edit the ML2 configuration file: vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers = flat,gre,vlan,vxlan  # network type drivers loaded from the neutron.ml2.type_drivers namespace
tenant_network_types = vxlan  # network types to allocate as tenant networks
mechanism_drivers = openvswitch  # mechanism drivers loaded from the neutron.ml2.mechanism_drivers namespace

[ml2_type_flat]

flat_networks = external  # physical network names with which flat networks can be created; "external" allows flat networks on the external physical network

[ml2_type_vxlan]

vni_ranges = 1:2000  # range of VXLAN VNI IDs available for tenant network allocation
vxlan_group = 239.1.1.1  # multicast group address; enables VXLAN multicast mode

[securitygroup]

enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]

local_ip = 172.0.0.177
enable_tunneling = True
bridge_mappings = external:br-ex  # physical networks mapped to the bridges created for them
vxlan_udp_port = 4789
tunnel_type = vxlan  # type of tunneling mechanism
tunnel_id_ranges = 1:2000  # range of VXLAN VNI IDs available for tenant network allocation
tenant_network_type = vxlan

[agent]

tunnel_types = vxlan
polling_interval = 2
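Before restarting the Neutron agents, it can help to sanity-check the values entered above. A minimal sketch in Python, assuming the same vni_ranges and vxlan_group strings as in the file (the VNI is a 24-bit field, and vxlan_group must be a valid multicast address):

```python
import ipaddress

vni_ranges = "1:2000"      # from [ml2_type_vxlan] above
vxlan_group = "239.1.1.1"  # from [ml2_type_vxlan] above

VNI_MAX = (1 << 24) - 1    # the VXLAN Network Identifier is a 24-bit field

# The range must fall inside the 24-bit VNI space.
low, high = (int(x) for x in vni_ranges.split(":"))
assert 1 <= low <= high <= VNI_MAX, "vni_ranges outside the 24-bit VNI space"

# The group must be a multicast address (224.0.0.0/4 for IPv4).
group = ipaddress.ip_address(vxlan_group)
assert group.is_multicast, "vxlan_group must be a multicast address"

print(f"{high - low + 1} VNIs available, multicast group {group}")
# 2000 VNIs available, multicast group 239.1.1.1
```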

At Compute Node:

Step 1: Authenticate the user with the OpenStack Keystone service.

Step 2: From the command line, edit the same file: vi /etc/neutron/plugins/ml2/ml2_conf.ini
The configuration parameters are the same as on the network node, with one difference: in the [ovs] section, do not add the bridge_mappings line, because only the network node is connected to the external network bridge. Everything else stays the same.

[ml2]

type_drivers = flat,gre,vlan,vxlan  # network type drivers loaded from the neutron.ml2.type_drivers namespace
tenant_network_types = vxlan  # network types to allocate as tenant networks
mechanism_drivers = openvswitch  # mechanism drivers loaded from the neutron.ml2.mechanism_drivers namespace

[ml2_type_flat]

flat_networks = external  # physical network names with which flat networks can be created; "external" allows flat networks on the external physical network

[ml2_type_vxlan]

vni_ranges = 1:2000  # range of VXLAN VNI IDs available for tenant network allocation
vxlan_group = 239.1.1.1  # multicast group address; enables VXLAN multicast mode

[securitygroup]

enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]

local_ip = 172.0.0.177
enable_tunneling = True
vxlan_udp_port = 4789
tunnel_type = vxlan  # type of tunneling mechanism
tunnel_id_ranges = 1:2000  # range of VXLAN VNI IDs available for tenant network allocation
tenant_network_type = vxlan

[agent]

tunnel_types = vxlan
polling_interval = 2
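Since the only intended difference from the network node is the missing bridge_mappings line, that can be checked programmatically. The sketch below runs against a sample string standing in for the compute node's [ovs] section, not the live file:

```python
import configparser

# Sample [ovs] section as it should look on a compute node:
# no bridge_mappings entry (only the network node owns br-ex).
sample = """
[ovs]
local_ip = 172.0.0.177
enable_tunneling = True
vxlan_udp_port = 4789
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# Fail loudly if someone copied the network-node file verbatim.
assert "bridge_mappings" not in cfg["ovs"], \
    "bridge_mappings belongs on the network node, not here"
print("compute-node [ovs] section looks correct")
```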

To verify that VXLAN is working in our OpenStack tenant network:

# source admin-openrc.sh
# neutron net-list

# neutron net-show <network-name>

# neutron net-show <network-name> --field provider:network_type --field provider:segmentation_id

# ovs-vsctl show | grep vxlan
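The grep above should reveal the VXLAN tunnel ports that Open vSwitch created on br-tun. As a rough illustration, the remote tunnel endpoints can be extracted from such output programmatically; the sample text below is made up, and real port names and IPs will differ:

```python
import re

# Illustrative fragment of `ovs-vsctl show` output on a node with one
# VXLAN tunnel port (hypothetical interface name and addresses).
sample = '''
    Bridge br-tun
        Port "vxlan-ac0000b2"
            Interface "vxlan-ac0000b2"
                type: vxlan
                options: {df_default="true", in_key=flow,
                          local_ip="172.0.0.177", out_key=flow,
                          remote_ip="172.0.0.178"}
'''

# Pull out the remote endpoint of every vxlan tunnel port.
remotes = re.findall(r'remote_ip="([\d.]+)"', sample)
print(remotes)  # ['172.0.0.178']
```

One tunnel port should appear per peer node; if the list is empty, the agents have not established the VXLAN mesh.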

Hope you found this post informative.
Thank you.
