DDLAB–Cluster The VM Hosts

And now we start to get into the fun stuff. As we progress through this post we are going to ensure that both of the VM hosts actually have Hyper-V installed; after all, this is an important component of the Dynamic Data Centre.

After we have installed and validated that we have indeed a good Hyper-V base, we are going to proceed and bring online our cluster and mount up our CSV Volume.

So there you have it, a lot of really cool stuff to sort out today. Time to get busy.

Hyper-V

We need to get the Hyper-V role installed on this server if it is to be of any value in the lab; so let's do that first, to be sure there are no hardware issues to panic over before we waste time getting all the other parts together.

# Install the Hyper-V role using the Server Manager module
Import-Module ServerManager

Add-WindowsFeature Hyper-V

# A reboot is required to complete the Hyper-V role installation
Shutdown /r

After the server has rebooted, we should see the Hyper-V role listed in Server Manager, and from PowerShell we can confirm that all the services have indeed started up correctly:

Get-Service -DisplayName "*Hyper*"

image

Networking

Now before we progress into actually installing the cluster, we are going to take another look at the networks on these physical hosts. It is important that we understand how we plan to cable this up, as we will be assigning these interfaces over the course of this post.

Each of my nodes has 3 x 1Gb NICs. We have already been using the first NIC for both our normal ‘Management’ usage and all our iSCSI traffic; I labelled this NIC ‘Management and iSCSI’ so that it is easy to identify.

The remaining two interfaces will be used for the following services:

  • Cluster Heartbeat

    • This will be a private link between both nodes, running at a minimum of 1Gb, as this interface will double up for the migration of VMs from node to node.
  • Hyper-V VM Switch

    • Connected back to our network switch, we will use this for the communications link from all our virtual machines. In a full deployment this might be a team of NICs, or even 10Gb interfaces; quite likely configured to use LACP so that we can pass our VLAN tags through to the switch from the VMs.

Referring to the Physical Nodes table again, you will see that I only issued IP addresses for the “Management and iSCSI” and “Cluster Heartbeat” interfaces.

| **Node** | **LAB-SVR01** | **LAB-VM01-01** | **LAB-VM01-02** |
| --- | --- | --- | --- |
| **LAN** | 172.16.100.10 | 172.16.100.100 | 172.16.100.101 |
| **Heartbeat (HB)** | – | 192.168.10.100 | 192.168.10.101 |

But for the “Hyper-V VM Switch” interface I assigned no information, as it will be used to ‘pass through’ network traffic from the VMs to the switch, so I will unbind the host operating system from using this interface for IP traffic.

Configure Hyper-V Switch

Launch the Hyper-V Manager, and from the **Actions** menu in the console click on the option **Virtual Network Manager…**

image

This will pop up the Virtual Network Manager. On the right-hand pane we have the **Create virtual network** section; in the option list **What type of virtual network do you want to create?** we will highlight the option **External** and click on the button **Add**.

image

The interface will update and on the right-hand pane we now see the **New Virtual Network** section:

  • In the **Name** field we will type VM Switch

  • Connection Type

    • Select **External** and choose the network interface which is to be used for the Hyper-V VMs. We can identify the correct interface by viewing the adapters in Control Panel > Network and Internet > Network Connections.

    • In the **Allow management operating system to share the network adapter** option we will clear the check box, as we already have management traffic on an interface of its own.

image

We can click on **OK** and we will then be presented with a warning that we may lose management access to the node. We can ignore this, since we know that we are not using this interface for management. Click on **Yes** to apply our changes.
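For reference, on later versions of Windows Server (2012 onwards) the Hyper-V PowerShell module can create the same external switch in one line; 2008 R2 has no in-box Hyper-V module, so here we stick with the GUI. A minimal sketch, assuming the physical adapter used for VM traffic is named “Local Area Connection 3” (substitute the name of your own third NIC):

# Create an external virtual switch on the VM-traffic NIC, without
# sharing the adapter with the management operating system
New-VMSwitch -Name "VM Switch" -NetAdapterName "Local Area Connection 3" -AllowManagementOS $false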

image

Don’t forget we have 2 nodes, so be sure to repeat the process on the second server.

Failover Clustering

Fantastic, that’s the last of the main requirements in place. Let’s move on and start our clustering effort.

Installing Failover Clustering

The Failover Clustering feature is not installed on our servers by default, so we will just need to quickly add it from our PowerShell interface.

# Install the Failover Clustering feature
Import-Module ServerManager

Add-WindowsFeature Failover-Clustering

This should take only a few moments, and does not normally require a reboot.
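If you want to confirm that the feature (and the Hyper-V role from earlier) landed correctly on each node, the ServerManager module can report their install state; a quick check along these lines:

# Verify that Hyper-V and Failover Clustering are both installed
Import-Module ServerManager
Get-WindowsFeature Hyper-V, Failover-Clustering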

Create The Cluster

So, as soon as the feature is in place, we can find and launch the Failover Cluster Manager interface and set about the process of building the cluster.

image

As this is a new cluster, we do not have a lot to see in the interface to begin with. If we focus on the Actions in the right pane of the window, we have an option called Create a Cluster… which will pop up a new wizard for us to work with.

image

We can read through the introduction text and, once you are happy, click on the **Next** button, which will get us started on the process.

image

On the **Select Servers** screen, we are asked to identify which servers we will be adding to create the cluster. In this case we will simply enter the FQDNs of the two VM hosts we are using to build the cluster – as a reminder those are **LAB-VM01-01.damianflynn.demo** and **LAB-VM01-02.damianflynn.demo**. After both have been added we can click on the **Next** button again.

image

The next step in the wizard is a Validation Warning, basically telling us that since this is a new configuration we really should run a validation on the two servers we are about to join into the cluster, so that we can be sure the rest of the process will run smoothly.

I will agree with the wizard and allow it to launch the validation tools for us, as a good health check.
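For reference, the same validation suite can also be launched from PowerShell using the FailoverClusters module; a minimal sketch:

# Run the full cluster validation tests against both VM hosts
Import-Module FailoverClusters
Test-Cluster -Node LAB-VM01-01, LAB-VM01-02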

image

Cluster Validation Wizard

The Create Cluster Wizard will now launch the Cluster Validation Wizard for us, presenting the welcome page so we can get an overview of the checks which will be executed.

image

After clicking on Next we are asked if we would like to run all the tests or select some specific ones. I am going to run the full suite, as it’s important that we check everything is good.

image

Next, we will be presented with the names of the servers which are going to be checked, and a list of the checks which will be carried out.

image

After clicking on Next the wizard will get down to work; the checks will take a few minutes to complete, but you can watch it progress through each one if you like.

image

After the work is complete, the wizard will present the results; if we have any hope of reading this information, we will click on the button View Report.

image

Internet Explorer will then launch and we can read through the report to see if everything is good. If you have been following along then I don’t expect you to have issues, but the report should be pretty clear about any issues it does find and how you might fix them.

image

I have a clean bill of health, so I will close the browser, and the Validation Wizard should now also be closed, so we can resume back where we left off in the Create Cluster Wizard.

In the Create Cluster Wizard we now get to give the cluster a name and an IP address. For the lab we will be using the following:

  • Cluster Name: LAB-VM01

  • Cluster IP Address: 172.16.100.99


Let’s add this information into the wizard’s interface and we can then continue to the next stage.
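As an aside, the whole cluster creation can also be scripted with the FailoverClusters module; a minimal sketch using the lab’s node names, cluster name and static IP address:

# Build the cluster from both nodes with the name and address chosen above
Import-Module FailoverClusters
New-Cluster -Name LAB-VM01 -Node LAB-VM01-01, LAB-VM01-02 -StaticAddress 172.16.100.99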

image

The obligatory confirmation check is then presented so that we can be sure we have selected the correct nodes, cluster name and IP address.

image

Once you click on **Next** the wizard will start its work, creating the cluster for us. This will take a few minutes to complete.

image

After the wizard is done processing, we will be presented with the final summary, which I expect should be a confirmation that **You have successfully completed the Create Cluster Wizard**. We can now click on **Finish** to return back to the Failover Cluster Manager.

image

Congratulations, nice work.

Create the Quorum

The next step is to get the Quorum online. In the GUI we can now see that the basic cluster services are online and ready. We will work from here to set up the Quorum first.

image

Right-click on the cluster server name LAB-VM01.damianflynn.demo and from the context menu select More Actions… and then Configure Cluster Quorum Settings…

image

Guess what… it’s a new wizard. Take a few moments to read the introduction and when you are ready click on Next.

image

On the next page we have to select how we would like the Quorum to operate. There are a few options presented, but you will see from the wizard that the best solution for us is to select Node and Disk Majority.

image

Now, we need to select the disk which we assigned for use as the Quorum by ticking the check box. Click Next once ready.

image

We are presented with a confirmation view so we can double-check everything is as we need. Click on Next to set the wizard to work.

image

The process should be pretty fast, and before we know it we will be looking at the summary page.

image

After we click on Finish, the status will update to show we are now configured with Node and Disk Majority (using Cluster Disk 1).
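For reference, the same quorum change can be made from PowerShell; a minimal sketch, assuming the quorum disk resource really is named “Cluster Disk 1” as shown in the status:

# Switch the cluster to Node and Disk Majority using the designated disk
Import-Module FailoverClusters
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"

# Confirm the new quorum configuration
Get-ClusterQuorum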

image

Create the Cluster Shared Volume

We are almost complete; the next step is to configure our Cluster Shared Volumes (CSV). We right-click on the cluster server name LAB-VM01.damianflynn.demo and from the context menu select Enable Cluster Shared Volumes…

image

As this is our first time enabling Cluster Shared Volumes we will be presented with a notice. After reading this we can click on I have read the above notice and then click on OK

image

The Failover Cluster Manager will update and we will now see a new node in the left tree called **Cluster Shared Volumes**. We will right-click on the node and select the option **Add Storage** from the context menu.

image

This will present us with a dialogue offering the disk we have prepared for use as the CSV storage. We will select the disk by ticking the check box. Click OK once ready.

image

After the dialogue closes, the Cluster Manager will process the Add Storage request.

image

After a few moments we will get confirmation that the CSV volume is now online, and we can see that the storage is mapped to C:\ClusterStorage\Volume1 and ready for our usage.
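The same step can also be done from PowerShell; a minimal sketch, assuming the disk we prepared for CSV is the cluster resource named “Cluster Disk 2” (use Get-ClusterResource to check the actual name on your cluster):

# Add the prepared disk resource to Cluster Shared Volumes
Import-Module FailoverClusters
Add-ClusterSharedVolume -Name "Cluster Disk 2"

# List the CSVs and confirm the mount point under C:\ClusterStorage
Get-ClusterSharedVolume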

image

Fantastic stuff, we have completed the build of our cluster, and successfully created a Cluster Shared Volume for hosting our virtual machines.

Hyper-V Testing

We have done well today, but if you’re like me, you are curious to see how this works. I will create a VM on the cluster to validate everything works and prove that we can indeed Live Migrate between the nodes.

Create a Virtual Machine

To do this, we right-click on the **Services and applications** node; from the context menu select **Virtual Machines…**, then select **New Virtual Machine…**, and finally we will select either of our two nodes **LAB-VM01-01** or **LAB-VM01-02**.

image

This will launch the New Virtual Machine Wizard for us; after reading the first page we will click on Next.

image

On the next page we will provide a name for the Virtual Machine. I am going to create one of the next nodes we will be using shortly in the lab, which will be the SCVMM installation. In the Name field we will enter, in this case, LAB-SCVMM, and for the location we will be using our Cluster Shared Volume, so I will browse to C:\ClusterStorage\Volume1.

image

Next we have to assign the RAM we will use on the VM, and click on Next

image

The Configure Networking page has a drop down, where we can select the VM Switch we created earlier in Hyper-V Manager, and then click on Next

image

On the Connect Virtual Hard Disk page we can define our own settings or use the suggested Virtual Hard Disk information:

  • Name – LAB-SCVMM.vhd

  • Location – C:\ClusterStorage\Volume1\LAB-SCVMM

  • Size – 35 GB


image

We now get the opportunity to select if we want to install an operating system. I am going to provide the ISO file for Windows Server 2008 R2, which I will need on my SCVMM server.

image

Finally, we will get a Summary screen to check our choices, and then we can click on Finish to create the Virtual Machine.
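As a side note, on later versions of Hyper-V (Windows Server 2012 onwards) this whole VM definition could be scripted with the Hyper-V module; a minimal sketch using the same name and CSV path (the 2GB memory figure is only an illustrative value, and the newer VHDX disk format is used here):

# Create the VM on the Cluster Shared Volume with a new 35 GB virtual disk,
# connected to the 'VM Switch' virtual network created earlier
New-VM -Name "LAB-SCVMM" `
       -Path "C:\ClusterStorage\Volume1" `
       -MemoryStartupBytes 2GB `
       -NewVHDPath "C:\ClusterStorage\Volume1\LAB-SCVMM\LAB-SCVMM.vhdx" `
       -NewVHDSizeBytes 35GB `
       -SwitchName "VM Switch"

# Make the new VM highly available in the cluster
Add-ClusterVirtualMachineRole -VMName "LAB-SCVMM"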

image

The Wizard will now start processing the request

image

And pretty quickly we will see the Virtual Machine creation summary

image

After we click on Finish we will see the Virtual Machine in the Services and applications list.

image

Start the Virtual Machine

We will start up the Virtual Machine and install Windows on the node

image

We will then see the server’s status update, confirming that it is trying to start and finally goes Online.

image

If we right-click on the name of the VM, we can select to open the console, and we should now see the Windows installer loaded from our setup ISO file.

image

Migration

We can now also test that the Live Migration function works. Again, highlight the name of the virtual machine, and this time select the option Live migrate virtual machine to another node, then select the other node in our cluster.

image

Assuming we have no problems, we should now see the status change to let us know that the VM is Migrating.

image

And a few seconds later we should see the Status return to Online, but the Current Owner will have changed to the other node in our cluster.
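The same test can be driven from PowerShell with the FailoverClusters module; a minimal sketch, assuming the clustered virtual machine role carries the same name as the VM, LAB-SCVMM:

# Move (live migrate) the clustered VM role over to the second node
Import-Module FailoverClusters
Move-ClusterVirtualMachineRole -Name "LAB-SCVMM" -Node LAB-VM01-02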

image

Wrap Up

Well, you should congratulate yourself if you have made it to this point and everything is working. You have all the main foundations in place for virtualisation. In the next few posts we will deploy some of the Microsoft management tool suites which make up part of System Centre and permit us to move from a Virtualised Data Centre to a Dynamic Data Centre.
