How do I explain virtualization to my Mother

As I have progressed in my career it's been increasingly hard to explain my job to people both inside and outside IT. There used to be a time when people in IT understood what I did; at this point most people really don't understand what I do or why. I have given up explaining it and just say I work with computers. Two years ago at VMworld the crew from VMware TV stopped me on the street and asked, how do you explain virtualization to your mother? They totally stumped me. I am lucky my mother has some technology in her life. She recently got a Nook and has discovered she can get books without leaving the house. For a woman in her 70s she is about as technically savvy as I can expect. My religious studies have taught me that analogies can be a great way to teach. So I present my analogy to explain virtualization.

The Apartment building

Imagine with me that I have just bought a 30,000 square foot housing space. As the owner I could rent out this space to a single four-person family. They would be very happy and have more space than they could ever use. It does present some critical problems. The family would have to be very rich in order to pay for my whole building. There is no way they could possibly use all the space, so there would be lots of wasted space. If the one family moved out I would have a huge expense to shoulder until I found another rich family who wanted 30,000 square feet. There are other issues: unless I was very handy I would have to hire someone to fix and repair the apartment when things broke, an expense that is wasted when no one is living in the apartment. The cost of heating, cooling and powering the apartment would be a huge expense that I would pass on to my single family. At this point the power bill alone might force the family to move out, once again leaving me to shoulder the whole bill. In reality, running a 30,000 square foot apartment building with a single tenant is a huge risk. In some neighborhoods it's totally possible to rent out a space like this to a single family and make a huge profit, either because money is no object or because they have some requirement that offsets the costs (like a home office).

The subdivided apartment

I prefer investments with less risk. After some examination I have discovered that in the neighborhood there is demand for one, two and three bedroom apartments. Each type of apartment has some common components: a bathroom, a living room and a kitchen. I create three standard configurations and start to subdivide my building into separate living spaces. Some of my living space is lost to overhead like hallways and doors. There are some shared areas that save space, for example stairs, elevators and laundry rooms. Making some areas shared reduces the space lost to overhead. I may even consider putting in a pool on the roof to increase each apartment's rent and increase my profit. Each of the apartments has its own plumbing with sinks, toilets and showers. Once these components leave the individual space they join the building's plumbing and utilize shared resources. It's important that I take into account the total possible shared utilization at the same time to avoid loss of individual services. After all, if everyone flushes their toilet at 5:00 PM I cannot have the pipes get stuck. I have to be careful that the individual actions of a single tenant cannot create a failure for all other tenants. This is one of the key reasons why each apartment has its own water heater: we never want the actions of a single bad neighbor to affect everyone else's experience.

What does the apartment have to do with virtualization

Virtualization is very much like the apartment. I have a large computer. Most of the time its 30,000 square feet are about 2% utilized. If I engineered the correct solution I could utilize the other 98% of wasted space. Much like humans, my applications don't like to live in the same space. Virtualization creates separate apartments for each service; these virtual apartments have some shared components and some individual components. For example, I may have shared network connections, power, even portions of memory (hallways) and shared storage (laundry room), while I have my own water heater (reservation/allocation of resources). I may have a flash cache on my server (pool on the roof) to improve the amenities and encourage higher rent. All of this is done in a fashion to protect the security of individual families and homes (hypervisor security). Virtualization has to take into account peak usage to avoid having the pipes filled with you know what at 5:00 PM. Much like my apartment, I need to hire systems administrators to provide care and feeding for my virtualization; the more apartments I deploy the better my cost savings, in theory (yes, I know there are diminishing returns when I need more workers).

What does virtualization not have to do with an apartment building

Virtualization brings a few key differences to the table over my apartment building. It is very costly for me to reconfigure my available space into larger or smaller apartments to fulfill demand; virtualization can do this on demand. If my apartment burns to the ground due to faulty wiring, my families cannot be moved within minutes to another apartment nearby with their furniture and home goods intact. Virtualization can do that.

Key elements

  • Virtualization is like an apartment building created to make efficient use of large wasted space
  • Virtualization has overhead due to shared components but the overhead uses what would be wasted space so it’s a net gain in most situations
  • Virtualization has limits on shared components and should be sized correctly (no full pipes at 5:00PM)
  • Virtualization is better than single homes in almost every way except one: It is still a shared resource and bad neighbors can still make it unlivable

Death of the sysadmin and birth of….

Sysadmins

I started my career as a sysadmin. I didn't want to spend all day sitting in a chair writing applications; I wanted to touch the hardware. I am a firm believer that every sysadmin is a control freak to some respect. They love how the machine obeys them. They enjoy telling users no you cannot, and figuring out ways to limit access. The essence of every good sysadmin is the innate need for improvement. In the early days of my career I was exposed to systems administrators who had hundreds of shell scripts; they had everything automated. As the years passed these older sysadmins seemed to be replaced with younger admins who had been raised in an easy world. They were used to clicking next to install applications and things that just work (hence the appeal of the iPhone). I am all for simple and easy gadgets and bringing computers into every older person's life. But the relative ease of the solutions has made life a little too easy for us.

Cloud

Then this crazy thing happened: the cloud. Amazon brought the easy button to server deployments. Some embraced the ease of the solution, others liked the agility. Whatever your motivation for using AWS, they have changed IT again. Everywhere I go, business units want to know why it takes so long to deploy a server. They want to know how to create their own AWS cloud. People everywhere have been deploying operating systems and getting IT done without systems admins.

The Auto Industry

When the auto industry first started, Henry Ford and his engineers would assemble a car from scratch. Everyone working on the car understood each component and how it worked. They understood the flow of assembly to make the car. Each of the assembly guys could build a car from scratch or design a car. As time passed, demand increased for the product and Ford had to increase his agility to create cars. He hired workers to build cars and assigned them specific roles with rote tasks. These workers would do the same task over and over again. This provided a few advantages: they got good at the task, and they did not need to know how to build a whole car. It also introduced some challenges: if they missed their task due to human error, failures were introduced. Eventually human workers were replaced with robotics. This reduced the errors and increased the cost. It also allowed Ford to build a lot more cars. Not all the jobs went away; they just changed. Workers were replaced with robotics and automation engineers. The people working on the cars had no idea how to build cars; they just kept the robotics working. Every other car manufacturer followed suit to compete. Auto manufacturing plants became huge, and downtime cost millions of dollars. Massive amounts of money are spent to ensure the plants keep running.

What lessons can we learn from Auto Industry

  • Having highly skilled humans build the cars worked great
  • Having architects design cars then hand off work instructions to workers introduced a lot of errors
  • Having automation reduces errors and requires workers with a new skill set
  • It is not required that the people keeping the automation running have an understanding of the product, they just need to understand the automation
  • The cost of automation will force a centralization of building cars
  • As manufacturing became centralized downtime became a critical issue

What does this have to do with sysadmins?

Thanks for bearing with me this far. If you're still reading and wondering why I wrote this article, let me explain. I want to suggest that the world is changing for systems administrators, and as control freaks they don't really like it. AWS has created a gold standard: we can deploy a system in minutes, why can't you? If you have not faced this question you will soon. Every shop wants to have AWS.

Every shop wants to have AWS but do they need it?

AWS has a very specific business model: deploy base templates for customers, then step away and collect cash. It's a great model. Functionality of the virtual product beyond being powered on is 100% your problem. AWS ensures uptime of power and networking. You still have to do a lot of work to make that server deployed in minutes usable. AWS has saved you the time of procuring hardware and working with siloed teams to get a server in place, but you don't have a money-making machine until you install your product. What is it about AWS that you really need? I suggest it is not agility; it's less hassle. AWS provides you freedom from people who seem to create never-ending road blocks while doing their job. Yes, I am looking at you security team. Yes, I am looking at you server deployment team. Yes, I am looking at you…

Why is IT so hard

IT is hard because it's never the same. In my career I have rarely seen the same request twice. If your business is Netflix and you have three types of servers, then automation makes sense. Most IT shops are not Netflix: every single business unit wants to drive IT choices, and so we get a spaghetti mess of IT. IT is hard because the business unit wants to drive technical choices instead of business requirements. Many years ago we had a business unit demand that their new workflow be built in SharePoint, never mind the fact that we were a Linux shop with no SharePoint. So the lesson is:

  • Business unit’s stop messing with IT.  Bring your needs well-defined to IT and let us implement it.  Trust us to do our job it’s why we cost so much.

Why do Menu’s exist

Restaurants have menu’s for the following reasons:

  • To limit customers' options – they could not possibly stock all ingredients
  • To help customers make choices – left to their own devices, customers would become confused by the options and leave
  • To create standard workflows and realize cost savings
  • To give the customers the illusion of choice

Why doesn’t IT have a Menu…. here it comes the ITIL service catalog.   Most service catalogs are too technical and don’t represent what the customer really needs.  What the customer needs is a service which is normally a lot more complex than a single server.  They have a project.

Project

Yep, that word again: project. It's so important we have a certification and a role to manage it. Business units rarely want one more Netflix streaming server; they expect IT to handle that if needed. They want to create a whole new business, and that requires a project. Our menu really needs to be a project menu, not a server menu. We need to stop offering the business unit separate components of our offering or they will keep getting into our business. We need to provide the business unit the correct choices that keep them away from dictating technology.

Death of a sysadmin … birth of a ..process engineer

So now that I have ranted for too long, what is the future of systems administration? I think we need to become process engineers. Very few people are going to understand the whole product. More will administer from an automation console rather than logging into a server. How do we re-tool for this change? I have a few suggestions:

  • Learn to examine process. Do something manually first. Document the process in extreme detail, using a process diagram. Critically look at your process diagram. Do you see how many manual steps you have? How can you automate them?
  • Help customers standardize and learn their language; stop jumping to technical solutions with your customers. Focus on their needs and requirements and allow the technology to be a black box.
  • Develop standard methods for documenting and ingesting new projects: create a documented process and follow it.
  • Automate everything you can, and develop solutions with an automation mindset: how would I do this if I had to deploy 100 servers instead of two?
  • Ask yourself: does this process, technology or choice scale up? If I had to increase the count by 1,000, would it still work?

Well thanks for reading my rant.  Let me know where I am wrong.

Design Scenario: Gigabit network and iSCSI ESXi 5.x

Many months ago I posted some design tips on the VMware forums (I am Gortee there if you are wondering). Today a user updated the thread with a new scenario looking for some advice. While it would be a bad idea personally and professionally for me to give specific advice without a design engagement, I thought I might provide some thoughts about the scenario here. This will allow me to justify some design choices I might make in the situation. In no way should this be taken as law. In reality everyone's situation is different and little requirements can really change the design. The original post is here.

The scenario provided was the following:

3 ESXi hosts (2 x Dell R620, 1 x Dell R720) each with 3 x 4-port NICs (12 ports total), 64GB RAM. (Wish I would have put more on them ;-))

1 Dell MD3200i iSCSI disk array with 12 x 450GB SAS 15K Drives (11+1 Spare) w/2 4 port GB Ethernet Ports

2 x Dell 5424 switches dedicated for traffic between the MD3200i and the 3 Hosts

Each host is connected to the iSCSI network through 4 dedicated NIC ports across two different cards

Each Host has 1 dedicated VMotion Nic Port connected to its own VLAN connected to a stacked N3048 Dell Layer 3 switch

Each Host will have 2 dedicated (active\standby) Nic ports (2 different NIC Cards) for management

Each Hosts will have a dedicated NIC for backup traffic (Has its own Layer 3 dedicated network/switch)

Each host will use the remaining 4 Nic Ports (two different NIC cards) for the production/VM traffic)

 would you be so kind to give me some recommendations based on our environment?

Requirements

  • Support 150 virtual machines
  • Do not interrupt systems during the design changes

Constraints

  • Cannot buy new hardware
  • Not all traffic is VLAN segmented
  • Lots of 1Gb ports per server

Assumptions

  • Standard Switches only (Assumed by me)
  • Software iSCSI is in use (Assumed again by me)
  • Not using Enterprise plus licenses

 

Storage

Dell MD3200i iSCSI disk array with 12 x 450GB SAS 15K Drives (11+1 Spare) w/2 4 port GB Ethernet Ports

2 x Dell 5424 switches dedicated for traffic between the MD3200i and the 3 Hosts

Each host is connected to the iSCSI network through 4 dedicated NIC ports across two different cards

I personally have never used this array model, so the vendor should be included in the design to make sure my suggestions here are valid for this storage system. Looking at the VMware HCL we learn the following:

  • Only supported on ESXi 4.1 U1 through 5.5 (no 5.5 U1 yet, so don't update)
  • You should be using VMW_PSP_RR (Round Robin) for path failover
  • The array supports the following VAAI primitives: Block Zero, Full Copy, HW Assisted Locking

The following suggestions should apply to physical cabling:

[Diagram: storage cabling]

Looking at the diagram I made the following design choices:

  • From my limited understanding of the array, the cabling follows the best practice guide I could find.
  • Connections from the ESXi hosts to the switches are made to create as much redundancy as possible, using all available cards. It is critical that the storage be as redundant as possible.
  • Each uplink (physical NIC) should be connected to an individual vmkernel port group, and each of those port groups should be configured with only one active uplink.
  • Physical switches and port groups should be configured to use the native VLAN, assuming these switches don't do anything other than carry storage traffic between these four devices (three ESXi hosts and one array); if the array and switches provide storage to other things, you should follow your vendor's best practices for segmenting traffic.
  • Port binding for iSCSI should be configured per the VMware and vendor documentation.

New design considerations from storage:

  • 4 1GB’s will be used to represent max traffic the system will provide
  • The array does not support 5.5 U1 yet so don’t upgrade
  • We have some VAAI natives to help speed up processes and avoid SCSI locks
  • Software iSCSI requires that forged transmissions be allowed on the switch
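
To put a rough number on that ceiling, here is a quick back-of-the-envelope calculation in Python (a sketch only; real iSCSI throughput will be lower once protocol overhead and the array's own limits come into play, and any single path still tops out at one link):

    # Rough theoretical ceiling for 4 x 1Gb iSCSI uplinks per host.
    # Real throughput is lower due to TCP/IP and iSCSI overhead, and a single
    # LUN path is still limited to one link's worth of bandwidth.
    links = 4
    link_gbps = 1.0                       # gigabits per second per uplink

    total_gbps = links * link_gbps        # 4.0 Gb/s aggregate
    total_mbytes = total_gbps * 1000 / 8  # ~500 MB/s before overhead

    print(f"Aggregate ceiling: {total_gbps} Gb/s (~{total_mbytes:.0f} MB/s)")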

Advice to speed up iSCSI storage

  • Find your bottleneck – is it switch speed, array processors, or ESXi software iSCSI? – and solve it.
  • You might want to consider Storage DRS to automatically balance load based on space and IO metrics (requires an Enterprise Plus license but saves so much time). It also has an impact on CBT backups, forcing them to do a full backup after a move.
  • Hardware iSCSI adapters might also be worth the time, though they have little real benefit in the 5.x generation of ESXi.

 

Networking

We will assume that we now have eight total 1Gb ports available on each host. We have a current network architecture that looks like this (avoiding the question of how many virtual switches):

[Diagram: current network layout]

I may have made mistakes in my reading, but a few items pop out to me:

  • vMotion does not have any redundancy, which means if that card fails we will have to power off VMs to move them to another host.
  • Backup also does not have redundancy, which is less of an issue than the vMotion network.
  • Not all traffic has redundant switches, creating single points of failure.

A few assumptions have to be made:

  • No single virtual machine will require more than 1Gb of traffic at any time (otherwise we would have to be looking into LACP or etherchannel solutions).
  • Management traffic, vMotion and virtual machine traffic can live on the same switches as long as they are segmented with VLANs.

 

Recommended design:

[Diagram: recommended network design]

  • Combine the management switch and VM traffic switch into dual-function switches that provide both types of traffic.
  • Use VLAN tags to carry vMotion and management traffic on the same two uplinks, providing card redundancy (configured active/passive). This could also be configured with multi-NIC vMotion, but I would avoid that due to complexity around management network starvation in your situation.
  • Backup continues to have its own two adapters to avoid contention.

This does require some careful planning and may not be the best possible use of links.   I am not sure you need 6 links for your VM traffic but it cannot hurt.

 

Final Thoughts:

Is any design perfect? Nope, there is lots of room for error and unknowns. Look at the design and let me know what I missed. Tell me how you would have done it differently; share so we can both learn. Either way I hope it helps.

Deep Dive: Network Health check

vSphere 5.1 introduced one of my favorite new features: network health check. This feature is designed to identify problems with MTU and VLAN settings. It is easy enough to set up MTU and VLANs in ESXi, especially with a dVS. In most environments the vSphere admins don't control the physical switches, making confirmation of the upstream configuration hard. The health check resolves these issues. It is only available on dVS switches and only via the web client. (I know, time to start using that web client; your magical fat client is going away.) If you have an upstream issue with MTU you will get an alert in vCenter. You can find the health check by selecting the dVS and clicking on the Manage tab; in the middle pane you will see Health check, which you can edit and enable. You came here because you want to know how it works.

 

MTU

The MTU check is easy. Each host sends a ping message to the other nodes. This ping message has a special header that tells the network not to fragment (split) the packet. In addition it has a payload (empty data) to make the ping the size of the maximum MTU. If the host gets a return message from the ping it knows the MTU is correct. If it fails, we know the MTU is bad. Each node checks its MTU at an interval. You can manually check your MTU with vmkping, but the syntax has changed between 5.0, 5.1 and 5.5, so look up the latest syntax.
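
To make the idea concrete, here is a rough Python sketch of the same kind of do-not-fragment probe run from a Linux box (this is not VMware's implementation and not vmkping syntax; the 28 bytes subtracted cover the 20-byte IP and 8-byte ICMP headers, and the address below is made up):

    import subprocess

    def path_supports_mtu(host: str, mtu: int = 9000) -> bool:
        """Send one ICMP echo with the don't-fragment bit set and a payload
        sized so the whole IP packet equals the expected MTU."""
        payload = mtu - 28
        result = subprocess.run(
            ["ping", "-M", "do", "-s", str(payload), "-c", "1", "-W", "2", host],
            capture_output=True,
        )
        return result.returncode == 0

    # Example: verify a jumbo-frame path to a storage vmkernel address
    print(path_supports_mtu("10.10.10.21", mtu=9000))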

 

VLAN

Checking the VLANs is a little more complex, because each VLAN has to be checked. One host on the vDS (not sure which one, but I am willing to bet it's the master node) sends out a broadcast layer 2 packet on the VLAN. Then it waits for each node to reply to the broadcast via a unicast layer 2 packet. You can determine which hosts have VLAN issues based upon who reports back. I assume the host marked as bad then tries to broadcast as a method to identify failed configuration or partitions. This test is repeated on each VLAN at regular intervals, and it only works when two peers can connect.
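
Here is a minimal Python sketch of the bookkeeping side of that check, just to make the broadcast-and-reply idea concrete (the real mechanism is internal to the vDS; the host names and reply data below are made up for illustration):

    def find_bad_vlan_hosts(expected_hosts, replies_by_vlan):
        """Conceptual model: for each VLAN the master broadcasts on, compare
        the hosts that answered with a unicast reply against the hosts that
        should be reachable; anything missing gets flagged."""
        problems = {}
        for vlan, responders in replies_by_vlan.items():
            missing = sorted(set(expected_hosts) - set(responders))
            if missing:
                problems[vlan] = missing
        return problems

    # Hypothetical result of one check interval on a three-host cluster:
    replies = {
        100: {"esx01", "esx02", "esx03"},   # VLAN 100 is trunked everywhere
        200: {"esx01", "esx03"},            # esx02's uplink is missing VLAN 200
    }
    print(find_bad_vlan_hosts(["esx01", "esx02", "esx03"], replies))
    # -> {200: ['esx02']}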

Teaming policy

In ESXi 5.5 they added a check of the teaming policy against the physical switch. This check identifies mismatches between IP Hash teaming and switches that are not configured for etherchannel/LACP.

 

Negative Effect of Health check

So why should I not use health check? Well, it does produce some traffic, and it does require you to use the web client to enable it and to determine which VLANs are bad. Otherwise I cannot figure out a reason not to use it. It is a simple and easy way to find issues.

Design Advice on health check

Health check is a proactive way to determine upstream VLAN or MTU issues before you deploy production to that VLAN. It saves a ton of time when troubleshooting and fighting between networking and server teams. I really cannot see a reason not to use it. I have not tested the required bandwidth, but it cannot be huge. My two cents: turn it on if you have a vDS; if you don't have a vDS I hope you only have ten or fewer VLANs.

Deep Dive: vSphere Traffic Shaping

Traffic shaping is all about the bad actor scenario. We have hundreds of virtual machines that all get along with each other. The application team deploys an appliance that goes nuts and starts to use its link at 100%. Suddenly you get a call about database and website outages. How do you deal with the application team's bad actor? This is the most common reason why every apartment has its own water heater. My wife would be very unhappy if she could not take her hot shower in the morning because Bob upstairs took an extra long shower an hour ago. Sharing resources is great as long as resources are unlimited, nothing is over-provisioned, and usage patterns stay static. In the real world none of those things is true. You are likely limited on resources, over-provisioned, and your traffic patterns change every single day. Limits allow us to create constraints upon portions of resources in order to control bad actors.

Limits (available on any type of switch)

Limits are, as expected, caps that a machine cannot cross. This allows a machine to see a 10Gb uplink but only use 1Gb at most. The slowdown is injected into the communication stream via normal protocol methods. The limit settings in VMware can be applied on the port group, or on a dvPort or dvPort group. Notice the difference: on dVS switches we can apply limits on individual ports as well as port groups. Limits on standard switches apply to outbound traffic only, while a dVS can limit both inbound and outbound traffic. There are three options on limits:

  • Average bandwidth – the average number of bits per second to allow across the port.
  • Peak bandwidth – the maximum bits per second to allow across a port when it is sending a burst; this caps the bandwidth the port can use while spending its burst allowance.
  • Burst size – the maximum size of a burst, in bytes. This can be viewed as a bank: when you don't use all your average bandwidth, the unused amount is stored up (to the burst size) to be spent when needed (see the sketch below).
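
To show how the three numbers interact, here is a minimal token-bucket style sketch in Python of that bank idea (an illustration of the concept only, not the hypervisor's actual shaper; all of the numbers are made up):

    def allowed_this_second(requested_bits, average_bps, peak_bps, burst_bytes, bank_bits):
        """One-second tick of a simple burst-bank model.

        Unused average bandwidth accumulates in the bank (capped at the burst
        size); when a port wants more than its average it may spend the bank,
        but never faster than the peak rate."""
        burst_cap_bits = burst_bytes * 8
        if requested_bits <= average_bps:
            # Under the average: send it all and save the leftover, up to the cap.
            bank_bits = min(bank_bits + (average_bps - requested_bits), burst_cap_bits)
            return requested_bits, bank_bits
        # Over the average: spend from the bank, but never exceed the peak rate.
        extra = min(requested_bits - average_bps, bank_bits, peak_bps - average_bps)
        return average_bps + extra, bank_bits - extra

    # Hypothetical port: 100 Mb/s average, 500 Mb/s peak, 50 MB burst size,
    # 300 Mb already saved in the bank, and the VM suddenly wants 400 Mb/s.
    sent, bank = allowed_this_second(400_000_000, 100_000_000, 500_000_000,
                                     50_000_000, 300_000_000)
    print(sent, bank)   # -> 400000000 0 (the whole bank gets spent)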

 

Limits of the Limits

Limits produce some, well, limits. Limits are always enforced, meaning even if bandwidth is available it will not be allocated to the port group/port. Limits on VSSs are outbound only, meaning you can still flood a switch. Limits are not reservations; machines without limits can consume all available resources on a system. So effectively limits are only useful to protect everyone else from a bad actor. They are not a sharing method. Limits on the network do have their place, but I would avoid general use if possible.

 

Network IO Control a better choice

Network IO Control (NIOC) is available only on the vDS switch. It provides a solution to the bad actor problem while providing flexibility. NIOC is applied to outbound traffic. NIOC works very much like resource pools for compute and memory. You set up a NIOC share (resource pool) with a number between 1 and 100. vSphere comes with some system-defined NIOC shares like vMotion and management. You can also define new resource pools and assign them to port groups. NIOC only comes into play during times of contention on the uplink. All NIOC shares are calculated on an uplink-by-uplink basis: the shares of all active traffic types on the uplink are added together. For example, assume my uplink has the following shares:

  • Management 10
  • vMotion 20
  • iSCSI 40
  • Virtual machines 50

If contention arises and only management, iSCSI and virtual machines are active, we would have 100 total shares. This number is then used to divide the total available bandwidth on that uplink. Let's assume we have a 10Gb uplink. Then each active traffic type would get the following, based on shares:

  • Management 1Gb
  • iSCSI 4Gb
  • Virtual machines 5Gb

This example also assumes each type is using 100% of its available share. If management is only using 100Mb, the others will get its leftover amount divided by their share counts (in this case 900Mb split over 90 shares, with 40 going to iSCSI and 50 to virtual machines). If a new traffic type comes into play, the shares are recalculated to meet the demands. This allows you to work out a worst case for each traffic type; with all four types active (120 total shares) on the 10Gb uplink:

  • Management will get at least 0.83Gb
  • vMotion will get at least 1.67Gb
  • iSCSI will get at least 3.33Gb
  • Virtual machines will get at least 4.17Gb

There is one wrinkle to this plan with multi-nic vMotion but I will address that in another post.
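
Here is a small Python sketch of that share math using the numbers above (a simplified model of how shares divide one uplink under contention, not the actual NIOC scheduler):

    def nioc_split(link_gbps, shares, active):
        """Divide one uplink's bandwidth among the traffic types that are
        actually active, in proportion to their shares."""
        total = sum(shares[t] for t in active)
        return {t: round(link_gbps * shares[t] / total, 2) for t in active}

    shares = {"management": 10, "vmotion": 20, "iscsi": 40, "vm": 50}

    # vMotion idle: 100 active shares on a 10Gb uplink
    print(nioc_split(10, shares, ["management", "iscsi", "vm"]))
    # -> {'management': 1.0, 'iscsi': 4.0, 'vm': 5.0}

    # vMotion kicks in: the same uplink is now split over 120 shares
    print(nioc_split(10, shares, ["management", "vmotion", "iscsi", "vm"]))
    # -> {'management': 0.83, 'vmotion': 1.67, 'iscsi': 3.33, 'vm': 4.17}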

 

Design Choices

Limits have their uses, but they are hard to manage and really hard to diagnose. Imagine coming into a vSphere environment where limits are in place but you did not know; it could take a week to figure out that limits were causing the issues. My vote: use them sparingly. NIOC, on the other hand, should be used in almost every environment with Enterprise Plus licenses. It really has no drawback and provides controls on traffic.

Deep Dive: vSphere Network Load Balancing

In vSphere, load balancing is a hot topic. As the load per physical host increases, so does the need for more bandwidth. Traditionally this was done with etherchannel or LACP, which bond multiple links together so they act like a single link. This also helps avoid loops.

What the heck is a loop?

A loop exists any time two layer 2 (Ethernet) endpoints have multiple connections to each other.

 

It is possible with two virtual switches to create a bridged loop if care is not taken, although virtual switches by default will not create loops. On the physical switch side, protocols like spanning tree (STP) were created to solve this issue: STP disables a link if a loop is detected, and if the enabled link goes down STP turns on the disabled link. This process works for redundancy but does not do anything if link 1 is not a big enough pipe to handle the full load. VMware has provided a number of load balancing algorithms to provide more bandwidth.

Options

  • Route Based on Originating virtual port (Default)
  • Route Based on IP Hash
  • Route Based on Source MAC Hash
  • Route Based on Physical NIC Load (LBT)
  • Use Explicit Failover Order

 

In order to explain each of these options, assume we have an ESXi host with two physical network cards called nic1 and nic2. It's important to understand that the load balancing options can be configured at the virtual switch or port group level, allowing for lots of different load balancing on the same server.

Route Based on Originating virtual port (Default)

The physical NIC to be used is determined by the ID of the virtual port to which the VM is connected. Each virtual machine is connected to a virtual switch which has a number of virtual ports, and each port has a number. Once assigned, the port does not change unless the VM moves to another ESXi host. This number is the virtual port ID. I don't know the exact method used, but I assume it's something as simple as odds and evens for two NICs: everything odd goes to one uplink while everything even goes to the other. This method has the lowest virtual switch processing overhead and works with any network configuration. It does not require any special physical switch configuration. You can see, though, that it does not really load balance. Let's assume you have a lot of port groups, each with only one virtual machine, all on port 0. In this case all virtual machines would use the same uplink, leaving the other unused.
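
VMware doesn't publish the exact formula, so treat this Python sketch as an illustration of that odds-and-evens idea rather than the real algorithm:

    def uplink_for_port(virtual_port_id: int, uplinks: list) -> str:
        """Pick an uplink from the virtual port ID alone: a simple modulo means
        odd ports land on one NIC and even ports on the other (with two uplinks).
        The choice is static until the VM moves to another host."""
        return uplinks[virtual_port_id % len(uplinks)]

    uplinks = ["nic1", "nic2"]
    for port_id in (7, 8, 9, 10):
        print(port_id, "->", uplink_for_port(port_id, uplinks))
    # 7 -> nic2, 8 -> nic1, 9 -> nic2, 10 -> nic1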

Route Based on IP Hash

The physical NIC to be used is determined by a hash of the source and destination IP addresses. This method provides load balancing across multiple physical network cards for a single virtual machine; it's the only method that allows a single virtual machine to use the bandwidth of multiple physical NICs. It has one major drawback: the physical switches must be configured for etherchannel (802.3ad link aggregation) so they present both network links as a single link to avoid problems. This is a major design choice. It also does not provide perfect load balancing. Let's assume you have an application server that does 80% of its traffic with a database server. Their communication will always happen across the same link; their hash will always assign them the same link, so they will never use the bandwidth of two links (the sketch after the list below illustrates why). In addition, this method uses a lot of CPU.

  • When using etherchannel only a single switch may be used
  • Beacon probing is not supported on IP Hash
  • vDS is required for LACP
  • Troubleshooting is difficult because each destination/source combination may take a different path (some virtual machine paths may work while others will not, in an inconsistent pattern).
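
Here is a Python sketch of an IP-hash style selection (a simplified illustration, not the exact hash the vSwitch uses; the point is that a fixed source/destination pair always maps to the same uplink):

    import ipaddress

    def uplink_for_flow(src_ip: str, dst_ip: str, uplinks: list) -> str:
        """XOR the two addresses and take the result modulo the number of
        uplinks. The same source/destination pair always hashes to the same
        uplink, so one conversation never exceeds a single link's bandwidth."""
        src = int(ipaddress.ip_address(src_ip))
        dst = int(ipaddress.ip_address(dst_ip))
        return uplinks[(src ^ dst) % len(uplinks)]

    uplinks = ["nic1", "nic2"]
    # The app server and its database land on the same uplink every time...
    print(uplink_for_flow("10.0.0.10", "10.0.0.20", uplinks))   # nic1
    # ...while the same app server talking to a different peer may use the other.
    print(uplink_for_flow("10.0.0.10", "10.0.0.21", uplinks))   # nic2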

Route Based on Source MAC Hash

The physical NIC to be used is determined by a hash of the virtual machine's source MAC address. This method provides a more balanced approach than originating virtual port. Each virtual machine will still only ever use a single link, but the load will be distributed. This method has low CPU overhead and does not require any physical switch configuration.

Route Based on Physical NIC Load (Distributed Virtual Switch Required also called LBT)

The physical NIC to be used is determined by load. The NICs are used in order (nic1, then nic2): no traffic will be moved to nic2 until nic1 is utilized above 75% capacity for 30 seconds. Once this happens, traffic flows are moved to the next available NIC and stay on that NIC until another LBT event moves them again. LBT does require the dVS and some CPU overhead. It does not allow a single virtual machine to get more than 100% of a single link's speed, but it does balance traffic among all links during times of contention.
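
The trigger condition is simple enough to sketch in Python; the 75 percent and 30 second numbers come from the description above, and the actual flow migration inside the dVS is of course more involved:

    def lbt_should_move(utilization_samples, threshold=0.75, window_seconds=30,
                        sample_interval=1):
        """Return True when an uplink has stayed above the threshold for the
        whole window -- the condition under which LBT moves a flow elsewhere."""
        needed = window_seconds // sample_interval
        recent = utilization_samples[-needed:]
        return len(recent) >= needed and all(u > threshold for u in recent)

    # 30 one-second samples of nic1 utilization, all above 75%: move a flow.
    samples = [0.80] * 30
    print(lbt_should_move(samples))                    # True

    # A single dip below the threshold resets the clock.
    print(lbt_should_move(samples[:-1] + [0.60]))      # False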

Use Explicit Failover

The physical NIC to be used is simply the highest NIC on the list of available NICs; the others will not be used unless the first NIC is unavailable. This method does no load balancing and should only be used in very special cases (like multi-NIC vMotion).

 

Design Advice

Which one should you use? It depends on your needs. Recently a friend told me they never changed the default because they never get close to saturating a single link. While this approach has merit, and I wish more people understood their network metrics, you may need to plan for the future. There are two questions I use to determine which to use:

  • Do you have any virtual machines that alone require more than a single links bandwidth? (If yes then the only option is IP Hash and LACP or etherchannel)
  • Do you have vDS’s? (If yes then use Route based on physical nic load, if no then use default or source MAC)

Simply put, LBT is a lot more manageable and easier to configure.

Do IT certifications really matter?

Twice in the last week people in IT have asked this question of me. My answer has been: it depends. When I first started my career I hated certifications. This is mostly because in college I attended a Microsoft certification course. It was a memorize-the-content, don't-worry-if-you-don't-understand type of course and test. It seemed pointless to me; I passed the test and still had never worked with half the stuff I was tested on. The memorized information was soon lost and nothing other than a piece of paper was gained. This tainted my view toward certifications. For many years I did not see the point and avoided them. A few years ago an employer encouraged me to get a VMware certification. They also offered to pay. So I took them up on the offer and got the VCP certification. The required course for the certification was good because it allowed a lot of time for question and answer sessions. The instructor knew the material very well. It was a good course. With a little additional study I passed the test and had another IT certification.

What did I learn?

Knowing I was going to have to take the VCP test made my course learning more meaningful.   I was able to learn with intent.   I now realized that certifications might not have value but the knowledge did…  So since that time I have used certifications to motivate myself to learn.

Wait… certifications should translate into more money right?

While it is true my jobs continue to pay more as time goes along I do not believe this is because of my certifications.  I think it’s because of what I learned while doing the certifications.   Will certifications ensure more money?  Not always.   But more knowledge and skills will translate to more ability to do.

So you convinced me … what certs should I do?

Well, here is the tough one. I can tell you which certifications I see a lot on resumes and job postings:

  • ITIL – This one is on every resume. Buy a book off Amazon and take the test; it's not hard and people want it a lot.
  • VMware certification – Virtualization is hot, but only a few places have virtualization-only admins, so the VCP is normally enough. VCAP and above are not seen much on job postings. (Don't get me wrong, I am all about geeking out with VMware certs, as shown by my VCDX, but in translation to jobs the VCAP will not help you more than the VCP. The VCDX will, but it's a long journey.) The most fun test on that journey is the VCAP-DCA (it's a live test that makes you do things; it's so much fun).
  • Red Hat certification (normally RHCE) – Red Hat is still the leader in enterprise Linux and their cert is a practical exam that requires you to do things, not just know them.
  • Windows certification – They are a lot better than they used to be and look great for Windows jobs.
  • PMP – If you want to get into technical project management, this is the cert.
  • CCNA – If you are interested in networking start here, even if you don't have Cisco in your shop.

 

Live Tests

My final note is a shout-out to all testing systems that require you to work with a real environment, like the VCAP-DCA, CCNA or RHCE. These tests require you to know how to do things, and they are awesome. No pointless memorization required. We need more IT tests like this.