
Building a Hyper-V Cluster – Configuring Networks – Part 2/5

PowerShell Commands

# Create a new LBFO team from the host's physical network adapters
$NICname = Get-NetAdapter | %{$_.name}
New-NetLbfoTeam -Name LBFOTeam -TeamMembers $NICname -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort -Confirm:$false
# Attach a new virtual switch to the LBFO team (weight-based QoS, no default host vNIC)
New-VMSwitch -Name HVSwitch -NetAdapterName LBFOTeam -MinimumBandwidthMode Weight -AllowManagementOS $false

# Create vNICs on VSwitch for parent OS
# Management vNIC
Add-VMNetworkAdapter -ManagementOS -Name Management -SwitchName HVSwitch
Rename-NetAdapter -Name "vEthernet (Management)" -NewName Management
# In this lab we use a single VLAN; typically each subnet gets its own VLAN
#Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Management -Access -VlanId 10
New-NetIPAddress -InterfaceAlias Management -IPAddress 192.168.0.101 -PrefixLength 24 -DefaultGateway 192.168.0.1 -Confirm:$false
#New-NetIPAddress -InterfaceAlias Management -IPAddress 192.168.0.102 -PrefixLength 24 -DefaultGateway 192.168.0.1 -Confirm:$false
Set-DnsClientServerAddress -InterfaceAlias Management -ServerAddresses 192.168.0.211, 192.168.0.212

# Cluster1 vNIC
Add-VMNetworkAdapter -ManagementOS -Name Cluster1 -SwitchName HVSwitch
Rename-NetAdapter -Name "vEthernet (Cluster1)" -NewName Cluster1
# In this lab we use a single VLAN; typically each subnet gets its own VLAN
#Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Cluster1 -Access -VlanId 2
New-NetIPAddress -InterfaceAlias Cluster1 -IPAddress 10.0.2.20 -PrefixLength 24 -Confirm:$false
#New-NetIPAddress -InterfaceAlias Cluster1 -IPAddress 10.0.2.21 -PrefixLength 24 -Confirm:$false

# Cluster2 vNIC
Add-VMNetworkAdapter -ManagementOS -Name Cluster2 -SwitchName HVSwitch
Rename-NetAdapter -Name "vEthernet (Cluster2)" -NewName Cluster2
# In this lab we use a single VLAN; typically each subnet gets its own VLAN
#Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Cluster2 -Access -VlanId 3
New-NetIPAddress -InterfaceAlias Cluster2 -IPAddress 10.0.3.20 -PrefixLength 24 -Confirm:$false
#New-NetIPAddress -InterfaceAlias Cluster2 -IPAddress 10.0.3.21 -PrefixLength 24 -Confirm:$false

# iSCSI vNIC
Add-VMNetworkAdapter -ManagementOS -Name iSCSI -SwitchName HVSwitch
Rename-NetAdapter -Name "vEthernet (iSCSI)" -NewName iSCSI
# In this lab we use a single VLAN; typically each subnet gets its own VLAN
#Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName iSCSI -Access -VlanId 1
New-NetIPAddress -InterfaceAlias iSCSI -IPAddress 10.0.1.20 -PrefixLength 24 -Confirm:$false
#New-NetIPAddress -InterfaceAlias iSCSI -IPAddress 10.0.1.21 -PrefixLength 24 -Confirm:$false
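If you want to confirm the configuration before moving on, a few read-only cmdlets will show the team, virtual switch, host vNICs, and IP settings created above:

# Optional sanity check of the converged networking configuration
Get-NetLbfoTeam
Get-VMSwitch HVSwitch
Get-VMNetworkAdapter -ManagementOS
Get-NetIPAddress -InterfaceAlias Management, Cluster1, Cluster2, iSCSI | Format-Table InterfaceAlias, IPAddress, PrefixLength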

Cluster Network Roles

In the video we leverage PowerShell to deploy converged networking to our Hyper-V hosts. We have two physical network adapters to work with, but we need to implement all of the network roles in the table below so that we can deploy a cluster per best practices. To accomplish this we create a team and attach a virtual switch to it. This vSwitch is shared between the host and the VMs. The host is given four vNICs on the virtual switch to accommodate the various types of network traffic (Storage, Cluster1, Cluster2, Management). The failover cluster creation process will automatically detect iSCSI traffic on our storage network and set that network to no cluster access. It will also detect the default gateway on the management interface and set that network for cluster and client use; this is the network where our cluster network name will be created when the cluster is formed. The remaining two networks are non-routed and are used for internal cluster communication. Cluster communication, CSV traffic, and the cluster heartbeat will use both of these networks equally, and one of them will be used for live migration traffic. In 2012 R2 we have the option of using SMB3 for live migration, which forces the cluster to use both Cluster Only networks, if we prefer that to the default compression option. In the video we don't care which of the cluster networks is preferred for live migration, so we simply name our networks Cluster1 and Cluster2.
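As a rough sketch of what that looks like once the cluster has been formed (which happens later in this series), the roles the cluster assigned to each network can be checked, and SMB can be selected for live migration, with commands along these lines:

# Run after the cluster exists: Role 0 = None, 1 = Cluster Only, 3 = Cluster and Client
Get-ClusterNetwork | Format-Table Name, Role, Address
# Optionally prefer SMB over the default compression option for live migration
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB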

We break the traffic into four vNICs rather than using just one because this helps ensure network traffic efficiently utilizes the hardware. By default the management vNIC will use VMQ, and because we created the LBFO team with the Hyper-V Port load-balancing algorithm, the vNICs will be balanced across the physical NICs in the team. Because the network roles are broken out into separate vNICs, we can also later apply QoS policies at the vNIC level to ensure important traffic has first access to the network.
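For example, because the switch was created with -MinimumBandwidthMode Weight, relative weights could later be assigned per vNIC. The values below are illustrative only, not recommendations:

# Illustrative minimum bandwidth weights - tune to your own traffic profile
Set-VMNetworkAdapter -ManagementOS -Name Management -MinimumBandwidthWeight 5
Set-VMNetworkAdapter -ManagementOS -Name Cluster1 -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name Cluster2 -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name iSCSI -MinimumBandwidthWeight 20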

When using converged networks, the multiple vNICs provide the ability to fine-tune the quality of service for each type of traffic, while high availability is provided by the LBFO team they are created on. If we had unlimited physical adapters, we would create one team for management and a separate team for the VM access networks. We would use two adapters configured with MPIO for our storage network. The remaining two cluster networks would each be configured on a single physical adapter, since failover clustering will automatically fail over cluster communication between cluster networks in the event of a failure. Depending on the number of physical adapters available to you, many different configurations are possible; whichever you choose, keep the network traffic and access requirements outlined below in mind.
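As a sketch of that larger layout (the adapter names below are placeholders, not names from this lab), the separate teams and MPIO-based storage might be set up roughly like this:

# Hypothetical host with six or more physical adapters - adapter names are placeholders
New-NetLbfoTeam -Name MgmtTeam -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort -Confirm:$false
New-NetLbfoTeam -Name VMTeam -TeamMembers "NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort -Confirm:$false
# Storage adapters stay un-teamed; install MPIO and let iSCSI paths be claimed automatically
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI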

Network access type: Storage
Cluster role: None
Purpose: Access storage through iSCSI or Fibre Channel (Fibre Channel does not need a network adapter).
Network traffic requirements: High bandwidth and low latency.
Recommended network access: Usually dedicated and private access. Refer to your storage vendor for guidelines.

Network access type: Virtual machine access
Cluster role: N/A
Purpose: Workloads running on virtual machines usually require external network connectivity to service client requests.
Network traffic requirements: Varies.
Recommended network access: Public access, which could be teamed for link aggregation or to fail over the cluster.

Network access type: Management
Cluster role: Cluster and Client
Purpose: Managing the Hyper-V management operating system. This network is used by Hyper-V Manager or System Center Virtual Machine Manager (VMM).
Network traffic requirements: Low bandwidth.
Recommended network access: Public access, which could be teamed to fail over the cluster.

Network access type: Cluster and Cluster Shared Volumes (Cluster1)
Cluster role: Cluster Only
Purpose: Preferred network used by the cluster for communications to maintain cluster health. Also used by Cluster Shared Volumes to send data between owner and non-owner nodes. If storage access is interrupted, this network is used to access the Cluster Shared Volumes or to maintain and back up the Cluster Shared Volumes. The cluster should have access to more than one network for communication to ensure the cluster is highly available.
Network traffic requirements: Usually low bandwidth and low latency. Occasionally, high bandwidth.
Recommended network access: Private access.

Network access type: Live migration (Cluster2)
Cluster role: Cluster Only
Purpose: Transfer virtual machine memory and state.
Network traffic requirements: High bandwidth and low latency during migrations.
Recommended network access: Private access.
Table adapted from Hyper-V: Live Migration Network Configuration Guide

Resources

Networking Overview for 2012/R2
NIC Teaming Overview 2012/R2
Windows PowerShell Cmdlets for Networking 2012/R2

Check out the other posts in this series!

