PowerCLI: get a list of vMotion IPs in use

A good friend of mine whose PowerShell-fu is much stronger than mine shared this PowerShell snippet.  It can be used to collect the IP addresses of your vMotion interfaces.  It's pretty much run and go.

 

$Array = @()

# Walk every cluster, then every connected host, and collect vMotion-enabled VMkernel adapters
$Clusters = Get-Cluster | Sort-Object Name

ForEach ($Cluster in $Clusters){
    $VmHosts = $Cluster | Get-VMHost | Where-Object {$_.ConnectionState -eq "Connected"} | Sort-Object Name
    ForEach ($VmHost in $VmHosts){
        $Array += Get-VMHostNetworkAdapter -VMHost $VmHost -VMKernel |
            Where-Object {$_.VMotionEnabled} |
            Select-Object VmHost, IP
    }
}

$Array | Out-GridView

 

That's pretty much it.  You can swap Out-GridView for something else if you want, such as Export-Csv.
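For example, to write the same results to a file instead (the file name here is just an example):

$Array | Export-Csv -Path .\vMotion-IPs.csv -NoTypeInformation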


Managing your family farm in an industrialized world

I have spent a good portion of the last year working with customers to convert their infrastructure from one where every application is unique into a streamlined, automated powerhouse.  Every shop dreams of becoming like Google or Netflix: running an automated, agile, well-monitored environment.  Every company is racing to provide products that facilitate this new era of cloud management.

 

Game Changers

Every so often there is a product that innovates so well that it changes the standards.   When I was in college, being a systems administrator meant racking and stacking servers, large storage arrays, and huge buildings with cooling units.   X86 virtualization changed this game, in large part thanks to VMware's innovation.    The very fabric of server compute architecture changed when the first vMotion was completed.  VMware virtualization was a game changer.   When I look for the game changer that would make all my customers' requirements easy to deploy within five cookie-cutter models, I cannot find it.

 

The American Way

The simple answer is my customers are Americans: they want it customized every time.   They each control their own budgets without any central control.   Much like the American government, the business units cannot agree on a color, let alone a web development platform.    The technical teams have always been a second partner in the discussion because they don't control the budget.   There have been many great articles about running IT as a business.  I won't go into that side subject because I believe it's only part of the problem.   The simple problem is that even if we controlled the budget, we would still have customers who want it their way.

 

Why can't you be like Amazon or [..insert cloud provider name here]?

This gets thrown around a lot.  We are compared to Amazon when we fail to have agility.   It's a valid comparison if the company is really willing to buy from Amazon.   I argue that if you want IaaS like Amazon provides, most IT shops could deliver that fairly easily… but the business unit wants managed IT.   They want that website now and don't care about the management issues, but when the site fails it's your fault.   When the business buys a site on Amazon and it fails, it's the business's fault.   Your answer to this question really should be that it's a different business model.

 

Enough ranting, what is the solution?

To avoid making this post 100% rant, I would like to share some of the things I have used to manage the family farm.   If you give up the idea that you can have Amazon today (at least until the next technology game changer comes along – Docker or something else like CoreOS), here are my suggestions for managing the farm:

  • Monitoring should be automatic – You need a monitoring solution that includes discovery of new assets and monitoring of them. It should be able to discover running services and monitor them with minimal customer interaction.   Monitoring should also include historical data and individualized thresholds.   If possible, monitoring should automatically open tickets to be worked and resolved. (For VMware virtual environments, vRA and VIN do a decent job of this – with some limits.)
  • Configuration Management – This is the key component that is missing in so many shops. You need to manage your life cycle with configuration management.  You should use configuration management to spawn and configure new servers.  It should ensure that these servers are in compliance and remediate servers that are out of compliance.  This type of configuration management reduces troubleshooting and allows you to manage at least a portion of your infrastructure as a single entity.  (There are lots of products; I have the most experience with Puppet and it works very well. See the sketch after this list.)
  • Central log location – Putting all your logs (system, firewall, network, vCenter, etc.) into a single location allows you to alert across your infrastructure and do discovery in a single pane of glass.
  • Documentation – The whole IT industry is really bad at documentation; it is a major epic fail.   I cannot count the number of times I have googled for an issue I am facing only to find the solution on my own blog or in a forum posting I made.   We all forget. Find a location for documentation that is searchable and share it with the whole team.   This will really cut down on repeated wasted time and get you into a thoughtful method of practice.  This documentation should include, at the most basic level, what a server / switch / whatever does and common commands or issues.
  • Change Management – The dreaded term… IT shops hate it. In the best case changes should be recorded automatically, but we do need a single location to answer the grand question: what has changed?  Some people use wikis, some use text files, some use configuration management that logs change tickets.  It does not matter which; you need to have some change management.
  • Identity Management – Until you have a single meta-directory for all authentication and authorization, any effort to be agile will fail.
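To make the configuration management bullet a little more concrete: my examples above mention Puppet, but since the snippets on this blog are PowerShell, here is a minimal PowerShell DSC sketch of the same idea – declare the desired state, let the engine converge it, then test for drift.  The feature and service names are only illustrations.

Configuration WebServerBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Node 'localhost' {
        WindowsFeature IIS {
            Name   = 'Web-Server'        # example: make sure IIS is installed
            Ensure = 'Present'
        }
        Service W3SVC {
            Name        = 'W3SVC'        # example: keep the web service running
            State       = 'Running'
            StartupType = 'Automatic'
            DependsOn   = '[WindowsFeature]IIS'
        }
    }
}

WebServerBaseline -OutputPath .\WebServerBaseline          # compile the MOF
Start-DscConfiguration -Path .\WebServerBaseline -Wait -Verbose
Test-DscConfiguration                                      # $true = in compliance, $false = drift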

 

As you can see, the field is ripe and ready for vendors to harvest with products.  While I am waiting for the next game-changing technology, I am sure I will have more than a few family farms to manage.   These products can help.   If you want to make the journey off the family farm into a mega business, you might want to consider these steps:

 

  • Virtualize everything – removing your dependency on specific hardware or storage vendors
  • Consolidate all your services to a common management platform (monitoring, logging, change, hardware, virtualization, etc..)
  • Consolidate your operating systems to as few as possible
  • Choose a database platform and force all new development into that platform
  • Choose an application development platform and force all new development into that platform
  • Wait five to ten years to force all applications into the common platform and hope that management has the strength of will to make it that long

I hope my crazy notes have helped you on your family farm projects.   Please add additional suggestions.

Deep Dive: Configuration Maximums for dVS

Recently I have been thinking about the configuration maximums of the current virtual distributed switches.   The configuration maximums document for vSphere 5.5 states the following:

– Total virtual network switch ports per host (VDS and VSS ports) – 4096
– Maximum active ports per host (VDS and VSS) – 1016
– Hosts per distributed switch – 1000
– Static/Dynamic port groups per distributed switch – 6500
– Ephemeral port groups per distributed switch – 1016
– Distributed virtual network switch ports per vCenter – 60000

The first question is about the difference between these two numbers:

– Total virtual network switch ports per host (VDS and VSS ports) – 4096
– Maximum active ports per host (VDS and VSS) – 1016

In order to explain these numbers you must have some context about how a vDS and VSS work and allocate ports:

  • virtual standard switch (VSS) – allocates ports statically when a port group is created on the local ESXi host, so if you allocate 24 ports to a port group then 24 ports are taken.
  • virtual distributed switch (dVS) – allocates ports to the port group in vCenter, but each individual ESXi host only allocates ports for currently powered-on machines (assuming dynamic or static port binding).  So if you create a dVS port group with 24 ports but there is only one virtual machine in the port group, it takes only one port on its assigned ESXi host.

Ephemeral ports on a dVS work just like a VSS, so each local ESXi host uses all ports in a port group.

 

What is a proxy switch?

Proxy switch (or ghost switch) is a term you may see used to reference the local copy of the vDS on each host.  The proxy switch only contains information relevant to its own virtual machines.   When you vMotion a virtual machine to a host, vCenter allocates a new port on that ESXi host and syncs a new proxy configuration to that switch alone.

What is the difference between an active port and total ports?

An active port is defined differently depending on the switch type:

  • VSS – any port on a port group is considered active on each ESXi host
  • dVS static or dynamic binding – only ports in use on the ESXi host are active
  • dVS ephemeral – any ports on the port group are allocated on all ESXi hosts.

 

So in order to hit the 4096 total ports you would need a combination of VSS and dVS ports.    When using a single dVS you will hit the 1016 active-port limit and never reach the 4096 total ports.

Let's look at some dVS switch maximums:

– Static/Dynamic port groups per distributed switch – 6500
– Ephemeral port groups per distributed switch – 1016

These are software limits: static and dynamic port groups are enforced by the dVS at vCenter and have no relationship to the ESXi hosts.   Ephemeral port groups have a hard limit of 1016, which aligns with the maximum number of active ports per host, assuming you have 1016 port groups each with a single port.

How about the last set of numbers:

– Hosts per distributed switch – 1000
– Distributed virtual network switch ports per vCenter – 60000

Not much to say here.  The 60,000 limit creates a boundary that may require you not to allocate 1,000 ports per port group.  It is per vCenter, not per dVS, so the limit spans multiple vDSes.

Best practices and design considerations:

Given that only active ports take memory on an ESXi host, there is no reason not to allocate larger port groups; then again, since port groups can be grown dynamically, there is no reason not to keep them small.  I vote for something in between: it provides the best manageability without getting close to the maximums.
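If you want to see where you stand against these numbers, here is a rough PowerCLI sketch (run while connected to vCenter) that totals the ports allocated to port groups on each dVS.  The NumPorts property comes from Get-VDPortgroup; double-check it against your PowerCLI version.

# Sum configured ports per distributed switch and compare against the 60,000 per-vCenter limit
$GrandTotal = 0
ForEach ($Vds in Get-VDSwitch){
    $PortGroups = $Vds | Get-VDPortgroup
    $Total = ($PortGroups | Measure-Object -Property NumPorts -Sum).Sum
    $GrandTotal += $Total
    "{0}: {1} ports across {2} port groups" -f $Vds.Name, $Total, $PortGroups.Count
}
"All switches: {0} of 60000 distributed ports allocated in this vCenter" -f $GrandTotal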

VMware NSX: how to firewall between IPs, and issues

The first thing everyone does with NSX is try to create firewall rules between IP addresses.  I consider this a mistake, because the DFW can key off far better markers than IP addresses.   Either way, at some point you will want to use IP addresses in your rules.  This post describes how to set up firewall rules between IP addresses.

 

Setup:

I have two Linux machines each on their own subnet:

Linux1 – 172.16.1.10 – 172.16.1.0/24 network

Linux3 – 172.16.10.10 – 172.16.10.0/24 network

Routing is set up between the hosts so they can connect to each other.  I would like to block all traffic except SSH between these subnets.   We are going to assume that both of these networks exist in NSX.

NSX Setup:

First we have to set up an IP set in NSX Manager.  This is, surprisingly, a set of IP addresses.

  • Login to the vSphere web client
  • Click networking and security
  • Select your NSX Manager and expand it
  • Select Manage -> grouping objects
  • On the lower pane select IP Sets
  • Press the green plus button to add a new set
  • Setup each set as shown below:

[Screenshots: IP set definitions for 172.16.1.0/24 and 172.16.10.0/24]
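If you would rather script this step than click through the web client, the same IP sets can be created against the NSX Manager REST API.  This is only a rough sketch from memory – the endpoint path, the globalroot-0 scope and the XML shape should all be verified against the NSX API guide for your version, and the manager name and credentials below are just examples.

$NsxManager = "nsxmgr.lab.local"                                                         # example NSX Manager FQDN
$Auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("admin:VMware1!"))    # example credentials
$Body = @"
<ipset>
  <name>Net-172.16.1.0</name>
  <value>172.16.1.0/24</value>
</ipset>
"@
# NSX Manager normally uses a self-signed certificate, so you may need to relax certificate checking first
Invoke-RestMethod -Uri "https://$NsxManager/api/2.0/services/ipset/globalroot-0" `
    -Method Post -ContentType "application/xml" -Body $Body `
    -Headers @{ Authorization = "Basic $Auth" }

Repeat the call with 172.16.10.0/24 for the second set.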

Tale of multiple cities:

Here is where NSX gets interesting: you have multiple ways to block access.  First, a little understanding of firewall constructs in NSX:

  • Security Groups – these are groups of machines / constructs; they can include IP sets, MAC sets, and dynamic name-based wildcard information.  They can contain whole datacenters or a single virtual machine, and can be very dynamic with boolean conditions.
  • Security Policies – these are groups of firewall rules and introspection services that are applied to security groups.  Each firewall rule in a policy assumes the policy is assigned to one or more security groups, so your source or destination needs to be the policy's assigned security group.   The opposite side (source/destination) needs to be either a security group or any.

Remember we want the following rules:

  • SSH between 172.16.1.0/24 and 172.16.10.0/24 should be allowed bi-directional
  • Everything else between them should be blocked

Within these constructs there are a number of possible options for the firewalls:

  • Option 1 – rules in this order
    • Firewall rule allowing ssh between source: assigned policy group and destination: 172.16.10.0/24
    • Firewall rule allowing ssh between source: 172.16.10.0/24 and destination: assigned policy group
    • Firewall rule blocking any between source: assigned policy group and destination: 172.16.10.0/24
    • Firewall rule blocking any between source: 172.16.10.0/24 and destination: assigned policy group
    • Assign the security policy to 172.16.1.0/24
  • Option 2 – Security Groups
    • Firewall rule allowing ssh between source: assigned policy group and destination: assigned policy group
    • Firewall rule blocking any between source: assigned policy group and destination: assigned policy group
    • Assign the security policy to 172.16.1.0/24 and 172.16.10.0/24
  • Option 3 – Two policies
    • Policy 1
    • Firewall rule allowing ssh between source: Assigned Policy group and destination: 172.16.10.0/24
    • Firewall rule blocking any between source: Assigned Policy group and destination: 172.16.10.0/24
    • Assign Policy 1 to 172.16.1.0/24
    • Policy 2
    • Firewall rule allowing ssh between source: Assigned Policy group and destination: 172.16.1.0/24
    • Firewall rule blocking any between source: Assigned Policy group and destination: 172.16.1.0/24
    • Assign Policy 2 to 172.16.10.0/24

The first question anyone will ask is: why would I not use option 2?  It's smaller and easier to read, and it does accomplish the same goal.   But it lacks granularity in design.  What if you had a third subnet, 172.16.20.0/24, and you only wanted it to access 172.16.1.0/24?  Option 1 would easily be able to do this, while option 2 would mistakenly open up 172.16.10.0/24.   This is the heart of firewall design: layer rules to create granularity.    I am not a master of the firewall, but I do have a few suggestions:

  • Outbound firewall rules sound great but will immediately kill you in complexity
  • Protect the end points… apply rules to the destination (think: apply rules to the web server instead of every PC).  If you need to apply source rules, do it on the destination
  • Use naming conventions that describe the purpose of the rule, e.g. Allow-SSH-Into-Production
  • Consider using a DROP all on your default rule and then applying only allow rules in security groups
  • Rules that are part of the default section and not created in Service Composer don't show up in the Service Composer GUI, so don't use them beyond the default DROP; apply everything else as a security policy

 

Let’s do Option 1

  • Return to networking and security and select service composer
  • Select security groups and create a security group for each IP Set

[Screenshots: creating a security group for each IP set]

  • Repeat for the other subnet
  • Click on security policies
  • Create a new policy as shown below

[Screenshots: security policy creation wizard – the SSH allow and block firewall rules from Option 1]

  • Now that you have it built, you just need to apply it to a security group
  • Click on the text of your Security Policy
  • Select Manage -> Security Groups
  • Click edit and add 172.16.1.0/24

Now your rules should work.  You can test with ping and SSH.   Using the same dialogs you can create option 2 or 3.   The same rules you use for firewalls on physical entities apply to the DFW: you need to think before you create, or you will end up in firewall sprawl.

Deep Dive: How does NSX Distributed Firewall work

This is a continuation of my posts on NSX features; you can find the other posts on the Deep Dive page.   My favorite feature of VMware NSX is the distributed firewall.   It provides some long overdue security features.  At one time I worked in an environment where we wanted to ensure that every type of traffic was filtered with a firewall.   This was an attempt to increase security.  We wanted to ensure that there was no east <-> west traffic between hosts, so everything was on its own subnet: each virtual machine was deployed alone inside a /27 subnet.   Every communication required a trip to the firewall, which was also serving as a router.

This model worked but made us very firewall centric.  Everything required multiple firewall changes.  Basic provisioning took weeks because of the constant need for more firewall changes.   In addition, we wanted secondary controls, so each host ran its own host-based firewall as well.   This model caused a few major design constraints: you had to buy larger firewalls to handle all the routing, and you had to take your firewall guys to lunch all the time to avoid mega rage.

Enter the distributed firewall

The distributed firewall applies firewall rules in the ESXi kernel at each virtual machine's network interface, just outside the guest OS.  This has a few advantages:

  • No one on the OS can change the firewall rules
  • Only traffic that should be on the network is on the network; everything else gets blocked before leaving the virtual machine (think mega cost savings and less garbage traffic)
  • You can inspect each packet before it gets to the network and take action (lots of third-party plugins will be able to do this)
  • You can scale out your firewall capacity by adding more hosts, in a modular fashion that matches your server growth

The firewall has an API for third-party solutions like virus scanners or IDS.   This allows them to be part of the data stream in real time.

Components of Distributed firewall (DFW)

The DFW has a management plane, control plane and data plane which should be familiar to network admins.

  • Management plane – done via the vCenter plugin or API access to the NSX Manager.  This allows you to use any vCenter object as the source or destination (datacenter, VM name, vNIC, etc.).  It also allows you to define IP ranges for more traditional firewalling between IPs
  • Control plane – done by the NSX Manager.  It takes changes from vCenter, stores them in a central database, and then pushes the rules down to each ESXi host.  (The ruleset lands in /etc/vmware/vsfwd/vsipfw_ruleset.dat on each ESXi host)
  • Data plane – the ESXi hosts are the data plane, doing the actual work of the firewall.  All firewall functions take place in kernel modules on the ESXi hosts.  Remember that enforcement is done locally and at the destination, reducing the traffic on the wire.

Each vNIC gets its own instance of the DFW, put into place and managed on each host by a daemon called vsfwd.

How does it work?

Each firewall rule is created and applied via the NSX Manager GUI or API.   When published, the rules are pushed down to each ESXi host, which stores them in a file on disk holding all the firewall rules.   The ESXi host applies the rules to each instance of the DFW; when a change happens in vCenter (remember the management plane – a new vNIC, a VLAN change, etc.) the firewall rules are re-consulted.  IP-based rules require VMware Tools to identify the IP address or addresses of the server.
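As a quick illustration of the management plane API, you can pull the entire published rule set straight from NSX Manager.  Again this is a sketch from memory – the endpoint is the NSX-v firewall configuration URL as I recall it, so verify it against the API guide for your version; the manager name and credentials are examples.

$NsxManager = "nsxmgr.lab.local"                                                         # example NSX Manager FQDN
$Auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("admin:VMware1!"))    # example credentials
$Dfw = Invoke-RestMethod -Uri "https://$NsxManager/api/4.0/firewall/globalroot-0/config" `
    -Method Get -Headers @{ Authorization = "Basic $Auth" }
# $Dfw comes back as XML – drill into it to see the sections and rules that get pushed to the hosts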

How about vMotion?

Since the rules are applied at the virtual machine's vNIC, they move with the virtual machine when vMotion is used; there is no effect.

How about HA events?

Rules are loaded off disk and applied to virtual machines.

What about if NSX Manager is not available?

Rules are loaded off disk. New systems will get the rule set that applies to them; for example, if my new server is called Web-Machine12 and I have rules that are applied to all VMs named Web-*, then it will get them from disk.  This encourages the use of naming standards.

How about if I create a new virtual machine and it does not have any rules?

At the bottom is a default rule (some vote for allow all, others for deny all; I vote deny all), so your machine will get the default – deny all in my case.

Group and Policies

The DFW has the concept of security groups (yep, like it sounds): groups of similar systems.  These can be hard-coded to specific entities or dynamic, using regular expressions on any vCenter entity.   There are also security policies: groups of like-minded rules to be processed in order.   So you define the scope of the rules in security groups and define what is done in security policies.  It can be a one-to-many relationship on both sides (a security group can have many policies and a policy can have many groups), providing the ability to layer rules.

How do I track my firewall drops / accepts?

This is the first thing your firewall guys are going to ask for… and I don't like the answer right now.  Drops and accepts are logged to each ESXi host's syslog.   So you need to centralize your host logs and do some searches to gather the firewall events into one place.   If you search your host logs for "vsip_pkt" (in 6.1 this changed to dfwpktlogs:) you will find the firewall drops / accepts.
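If you have already centralized or exported the host logs to a file, a crude filter like this pulls out the packet log entries (the file name is just an example):

# Find DFW packet log entries (drops and accepts) in an exported ESXi syslog file
Select-String -Path .\esxi-syslog.log -Pattern 'vsip_pkt|dfwpktlogs' |
    Select-Object -First 20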

 

Deep Dive: How does NSX Distributed routing work

As a continuation of my previous post, How does the NSX vSwitch work, I am now writing about how the routing works.   I should thank Ron Flax and Elver Sena for walking through this process with me.   Ron is learning what I mean by knowledge transfer and being very patient with it.   This post will detail how routing works in NSX.  The more I learn about NSX the more it makes sense; it is really a great product.  My posts are 100% about VMware NSX, not multi-hypervisor NSX.

How does routing work anyways

Much like my last post, I think it's important to understand how routing in a physical environment works.   Simply put, when you want to go from one subnet (layer 2 segment) to another, you have to use a router.   The router receives a packet and has two choices (some routers have more, but this is generic):

  • I know that destination network lives down this path and send it
  • I don’t know that destination network and forward it out my default gateway

IP packets continue to be forwarded upward until a router knows how to deliver them.  It all sounds pretty simple.

Standardized Exterior Gateway protocols

It would be simple if someone placed an IP subnet at a location and never changed it.  We could all learn a static route to the location and never have it change.   Think of the internet like a freeway: every so often we need to do road construction, which may cause your journey to take longer via an alternate route, but you will still get there.  I am afraid that the world of IT is constant change, so protocols were created to dynamically update routers of these changes; standardized exterior gateway protocols were born (BGP, OSPF, etc.). I will not go into these protocols because I have a limited understanding and because they are not relevant for the topic today (they will be relevant later).   It's important to understand that routes can change and there is an orderly way of updating (think DNS for routing… sorta).

IP

A key component of routing is the Internet Protocol (IP) address.  This is a unique address that we all use every single day.   There are public IP addresses and internal IP addresses; NSX can use either and has solutions for bridging both.  This article will use two subnets: 192.168.10.0/24 and 192.168.20.0/24.   The /24 after the IP address denotes the size of the range in CIDR notation.  For this article it's enough to note that these ranges are on different layer 2 segments and normally cannot talk to each other without a router.

Setup

We are going to place these two networks on different VXLAN-backed segments (VNIs) as shown below:

[Diagram: one VM on VNI 5000 (192.168.10.0/24) and one VM on VNI 5001 (192.168.20.0/24)]

If you are struggling with the term VNI, just replace it with VLAN and it's about the same thing (read more about the differences in my last post).   In this diagram we see that each virtual machine is connected to its own layer 2 segment and will not be able to talk to the other or anything else.  We could deploy another host into VNI 5000 with the address 192.168.10.11 and they would be able to talk using NSX switching, but no crossing from VNI 5000 to VNI 5001 would be allowed.  In a physical environment a router would be required to allow this communication; in NSX this is also true:

[Diagram: the two VNIs connected by a distributed router]

Both of the networks shown would set their default gateway to be the router.  Notice the use of a distributed router: in a physical environment this would be a single router or a cluster, but NSX uses a distributed router.  Its capacity scales up as you scale your environment; each time you add a host you get more routing capacity.

Where does the distributed router live?

This was a challenge for me when I first started working with NSX; I thought everything was a virtual machine.   The NSX vSwitch is really just a code extension of the dVS, and this is also true of the router: the hypervisor kernel does the routing with minimal overhead.  This provides an optimal path for data; if both machines are on the same host, the communication never leaves that host (much like switching in a normal VSS).  The data plane for the router lives on the dVS.    There are a number of components to consider:

  • Distributed router – code that lives on each ESXi host as part of the dVS and handles routing.
  • NSX routing control VM – a virtual machine that controls aspects of routing (such as BGP peering).  It is in the control plane, not the data plane (in other words it is not required for routing to happen).  (Design tip: you can make it highly available by clicking the HA button at any time; this creates another VM with an anti-affinity rule.)
  • NSX control cluster – the control cluster mentioned in my last post.   It syncs configuration from the management and control plane elements to the data plane.

How does NSX routing work?

Here is the really neat part.  A router's job is to deliver IP packets.  It is not concerned with whether the packets should be delivered; it just flings IPs to their destination.   So let's go through a basic routing situation in NSX.  Assume that the Windows virtual machine wants to talk to the Linux virtual machine.

[Diagram: Windows VM on VNI 5000 (ESXi1) routing to Linux VM on VNI 5001 (ESXi2)]

 

The process is like this:

  1. The local L3 router becomes aware of each virtual machine as it talks on the network and updates the control cluster with an ARP entry, including VNI and ESXi node
  2. The control cluster updates all members of the same transport zone so everyone knows the ARP entries
  3. The Windows virtual machine wants to visit the website on Linux, so it ARPs
  4. ESXi1's DLR (Distributed Logical Router) returns its own MAC address
  5. Windows sends a packet to ESXi1's DLR
  6. The local DLR knows that Linux is on VNI 5001, so it routes the packet to the local VNI 5001 on ESXi1
  7. The switch on ESXi1 knows that Linux lives on ESXi2, so it sends the packet to VTEP1
  8. VTEP1 sends the packet to VTEP2
  9. VTEP2 drops the packet into VNI 5001 and Linux gets the message

It really makes sense if you think about it.  It works just like any router or switch you have ever used; you just have to get used to the distributed nature.   The greatest strength of NSX is the ability to handle everything locally.   If Linux were on the same ESXi host, the packet would never leave ESXi1 to get to Linux.

What is the MAC address and IP address of the DLR?

Here is where the fun begins.   It is the same on each host:

[Diagram: each host's DLR instance presenting the same gateway IP and MAC on every VNI]

Yep, it's not a typo: each router instance is seen as the default gateway for each VNI.  Since the layer 2 networking is done over VXLAN (via the VTEPs), each local router can have the same IP address and MAC address.  The kernel code knows to route locally and it all works.   This does present one problem: external access.

External Access

In order for your network to be accessible from external networks, the DLR has to present the default gateway to the outside.  But if each instance has the same IP / MAC, who responds to requests to route traffic?   One instance gets elected as the designated instance (DI) and answers all such requests.   If a message needs to be sent to an ESXi host other than the one running the DI, it routes as described above.   It's a simple but great process that works.

Network Isolation

What if your designated instance becomes isolated?  There is an internal heartbeat; if it goes unanswered, a new DI election happens.   What if networking fails on one of my ESXi hosts?  Then every other instance will continue to communicate with everyone else; packets destined for the failed host will fail.

Failure of the control cluster

What if the control cluster fails?   Since all the routing is distributed and held locally, everything will continue to operate.  Provisioning new elements in the virtual world may fail, but everything else will be good.  It's a good idea to ensure that you have enough control cluster nodes and redundancy, as they are a critical component of the control plane.

Deep Dive: How does the NSX vSwitch Work

Edit: Thanks to Ron Flax and Todd Craw for helping me correct some errors.

I have been blessed of late to be involved in some VMware NSX deployments, and I am really excited about the technology.   I am by no means a master of NSX, but I will post about my understanding as a method to spread information and assist with my personal learning.   In this post I will cover only the switching capabilities of NSX.

 

Traditional Switches

The key element of a layer 2 Ethernet switch is the MAC address.  This is a unique (perhaps) identifier on a network card; each network adapter should have a unique address.   A traditional physical switch learns the MAC addresses connected to each port when the network device first tries to communicate.  For example:

[Diagram: Windows physical server on switch port 1, Linux server on port 2]

When you power on the Windows physical server, the physical switch learns that MAC 00:00:00:00:01:01 is connected to port 1.  Any message destined for 00:00:00:00:01:01 should be sent to port 1.   This allows the switch to create logical connections between ports and limit the amount of wasted traffic.   This entry in the switch's MAC table (sometimes called a CAM table) stays present for 5 minutes (user configurable) and is refreshed whenever the server uses its network card.   The Linux server on port 2 is discovered exactly the same way, by physically talking on the port, and the table is updated for port 2.   If Windows wants to talk to Linux, their communication never leaves the switch as long as they are in the same subnet.   If a MAC address is unknown to the switch, it will broadcast the request out all ports in the hope that something will respond.

Address Resolution Protocol (ARP)

ARP is a protocol used to resolve IP addresses to MAC addresses.  It is critical to understand that ARP does not return the MAC address of the final destination; it only returns the MAC address of the next hop, unless the final destination is on the same subnet.  This is because Ethernet is only concerned with the next hop via MAC, not the end destination.

[Diagram: ARP resolution at each hop between subnets]

You can follow the communication with ARPs between each layer of the diagram.  The key point is that if the IP is not local, the router returns its own MAC and forwards the packet out the default gateway.

Traditional Virtual Switches

In order to understand the NSX vSwitch, it is critical that you understand how a traditional virtual switch works.  A traditional virtual switch (VSS or dVS) learns the MAC addresses of virtual machines when they are powered on.  As soon as a virtual machine is assigned a switch port, it becomes hard-coded in the MAC table for that virtual switch.   Anything that is local to that switch in the same VLAN or segment will be delivered locally.    Otherwise the virtual switch just forwards the message out its uplink and allows the physical switches to resolve the connection.

NSX Virtual Switch

The NSX virtual switch adds functionality to the traditional virtual switch.  The key feature is the ability to use VXLAN to span layer 2 segments between hosts without stretching multiple VLANs between them.   VXLAN also allows you to stretch layer 2 to distant datacenters, and supports up to 16 million segments versus the current limit of 4096 VLANs.  There are some common components that need to be understood:

  • VTEP (VXLAN Tunnel End Point) – an ESXi VMkernel adapter that has its own VLAN and IP address, including a gateway.  This interface must be set for 1600 MTU, and all physical switches/routers that handle this traffic must allow at least 1600 MTU (see the quick check after this list).
  • NSX virtual switch (also called the logical switch) – a software, kernel-based construct that does the heavy lifting. It is deployed to a dVS and works as an extension of the dVS.
  • NSX Manager – the management plane for NSX.  It acts as a central point for communication, scripting and control.  It is only required when making changes, as part of the management plane.
  • NSX control cluster – a series of virtual machines that are clustered via software.  Each node (there should be an odd number, and at least three) contains all required information, and load is distributed between them.  (Best practice: create a DRS rule to keep these on separate hosts; future releases may do this for you.)
  • VNI – VXLAN network identifier – an identifier used by VXLAN to separate networks (think VLAN tag).  They start at 5000 and go to 16,000,000.  It is easiest to think of VLAN tags when working with VNIs.
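Since the VTEP MTU requirement bites a lot of people, here is a quick PowerCLI check of the VMkernel adapters (the same cmdlet used in the vMotion post at the top of this page); it simply lists any adapter below 1600 MTU.

# List VMkernel adapters whose MTU is below the 1600 bytes VXLAN needs
Get-VMHost | Get-VMHostNetworkAdapter -VMKernel |
    Where-Object { $_.Mtu -lt 1600 } |
    Select-Object VMHost, Name, IP, Mtu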

With all the terminology out of the way, it's time to get down to the path.   The NSX virtual switch includes one key capability: the ability to switch packets between nodes or clusters without having layer 2 stretched between the clusters.  For my networking friends this means a reduction in spanning tree issues.

So let me lay it out below:

[Diagram: three-node control cluster, two ESXi hosts with VTEPs on VNI 5000, Windows and Linux VMs not yet connected]

We have deployed a three-node NSX control cluster.  We have two ESXi hosts running dVSes with the NSX virtual switch.  VXLAN has been enabled and a virtual network, VNI 5000, has been created.   The VTEPs have been configured.   We have created two virtual machines as shown in green.  Neither has been connected to the VNI network yet.

 

Time to learn our first MAC:

  • We connect the Windows server to VNI:5000 as shown below
  • The MAC table on our local switch is updated (learned), and the switch then passes its learned information to the control cluster
  • The control cluster passes it to all members of the logical switch (there are three methods to pass the information – unicast, multicast and hybrid – which I will cover in another post)

[Diagram: Windows server connected to VNI 5000; its MAC learned and synced via the control cluster]

 

This syncing of the MAC table ensures that each member of the VNI knows how to handle switching, creating a distributed switch (like a switch stack where multiple switches act as one).

When we power on the Linux server the same method is used:

  • We connect the Linux server to VNI:5000 as shown below
  • The MAC table on our local switch is updated (learned), and the switch then passes its learned information to the control cluster
  • The control cluster passes it to all members of the logical switch (again via unicast, multicast or hybrid)

[Diagram: both servers connected to VNI 5000 with MAC tables synced on both hosts]

Now we have a MAC table available on each switch, and it works great.   Let's follow the flow of communication.  Assume the Windows server wants to open a web page on the Linux server on port 80:

  • The user on the Windows server brings up Internet Explorer and types in 192.168.10.11
  • The Windows server sends out an ARP request for 192.168.10.11
  • ESXi1's virtual switch returns the MAC address 00:00:00:00:02:02
  • The Windows server sends out an IP packet with the destination MAC address 00:00:00:00:02:02
  • ESXi1's virtual switch encapsulates the packet and forwards it out VTEP1, destined for the IP of VTEP2
  • VTEP2 removes the VTEP encapsulation and forwards the packet to ESXi2's virtual switch on VNI:5000
  • The switch on ESXi2 sends the packet to the virtual port that the Linux server's network card is connected to.

 

This is how the NSX virtual switch handles switching.  At first you may say this makes no sense at all… wouldn't a VLAN just be easier?   There are a number of benefits this brings:

  • Limits your Spanning tree to potentially top of rack switches if architected correctly
  • Allows you to expand past the 4096 VLAN limit
  • Opens the door for other NSX services (which I will post about in the future.)

 

As I mentioned, this is just my understanding; I do not have inside knowledge.  If I have made a mistake, let me know and I'll test and then correct it.