Y-Corner

On the Edge of the Network

vSphere 5.0: Setting Up the Virtual Machine Environment

Setting up the virtual machine environment in ESXi 5.0 is fairly straightforward. Like ESXi 4.0, ESXi 5.0 is managed remotely via the vSphere Client. I’ve assigned a static IP address to the whitebox ESXi host. Using the assigned IP address and the user credentials created in the ESXi 5.0 settings menu, we can use VMware’s vSphere Client to connect to the whitebox ESXi server over the network. I installed the vSphere Client on my trusty Zenbook UX21E to access the whitebox ESXi server remotely.
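
Everything in this post was done through the vSphere Client GUI, but the same connection can be made programmatically against the vSphere API. Below is a rough sketch using the pyVmomi Python bindings; the IP address and credentials are placeholders for the static IP and the account configured on the host.

```python
# Rough sketch: connect to a standalone ESXi host over the vSphere API
# using the pyVmomi bindings. IP and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect

# ESXi uses a self-signed certificate, so skip verification for a lab box.
context = ssl._create_unverified_context()

si = SmartConnect(host="192.168.1.50",    # placeholder: whitebox ESXi static IP
                  user="root",
                  pwd="your-password",    # placeholder credentials
                  sslContext=context)

about = si.RetrieveContent().about
print("Connected to", about.fullName)
```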

After logging in with the vSphere Client, the first thing you will see is a home screen, with options to view your inventory (your ESXi server), roles (users), and system logs. From Inventory, you can select your ESXi server.
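
For reference, the same inventory view can be pulled over the API. This is a sketch that assumes the `si` connection object from the snippet above.

```python
# Rough sketch: list the inventory (host systems) over the vSphere API,
# reusing the `si` connection from the previous snippet.
from pyVmomi import vim

content = si.RetrieveContent()
host_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for host in host_view.view:
    print(host.name, "-", host.runtime.connectionState)

host_view.DestroyView()
```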

After selecting the whitebox server, you are given several tabbed options, one of which is “Summary”, which gives you an overall hardware description of your ESXi server. Details such as processor model, motherboard model, CPU cores, CPU usage, memory usage, etc. are listed. With the whitebox setup that I am utilizing, probably the most interesting part of this section is that ESXi 5.0 reports that DirectPath I/O (IOMMU) is supported (woot!) on the GA-970A-UD3, which allows us to pass hardware devices directly through to the VMs. Not too many consumer motherboards support this feature, which makes this even more exciting.
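
The same details the “Summary” tab shows can be read from the HostSystem object, including a per-device check of passthrough capability. A rough sketch, again assuming the `si` connection from the first snippet:

```python
# Rough sketch: read the "Summary" details and check DirectPath I/O
# (PCI passthrough) capability for the host's PCI devices.
from pyVmomi import vim

content = si.RetrieveContent()
host_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = host_view.view[0]            # the whitebox is the only host

hw = host.summary.hardware
stats = host.summary.quickStats
print("CPU model :", hw.cpuModel)
print("CPU cores :", hw.numCpuCores)
print("Memory    :", hw.memorySize // (1024 ** 2), "MB")
print("CPU usage :", stats.overallCpuUsage, "MHz")
print("Mem usage :", stats.overallMemoryUsage, "MB")

# DirectPath I/O: passthruCapable is reported per PCI device.
passthru = host.config.pciPassthruInfo or []
capable = [p for p in passthru if p.passthruCapable]
print("Passthrough-capable PCI devices:", len(capable))
```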

Moving on, before you can start creating VMs, a storage space (datastore) must be created. As this whitebox server has two hard drives (a Seagate Barracuda ES 1 TB 7200 RPM and a WD 80 GB 7200 RPM SATA HDD), we can create storage spaces on these devices by going to the “Configuration” tab and selecting “Storage”. In this section, there is an “Add Storage” option, which lets us specify a storage device to utilize. Clicking on it gives us a choice of storage type, such as a local disk, iSCSI, etc., or a Network File System (NFS); for the simplicity of this setup, we will be using the local drives installed on this server. In the next screen, a list of all detected hard drives is shown, from which we can select a hard drive to create a storage space on. After selecting a drive, you are asked whether to format it as VMFS-5 or VMFS-3. As I won’t be utilizing legacy systems that have issues with storage spaces larger than 2 TB, I selected VMFS-5. After going through the remaining setup prompts, your newly created storage space will show up in the “Storage” section. Please note that this must be repeated for each additional storage device you wish to utilize.
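
The “Add Storage” wizard has a scripted equivalent through the HostDatastoreSystem calls in the vSphere API. The sketch below simply takes the first eligible local disk and uses a placeholder datastore name; it again assumes the `si` connection from earlier.

```python
# Rough sketch: create a VMFS datastore on a local disk, roughly what the
# "Add Storage" wizard does. Datastore name is a placeholder.
from pyVmomi import vim

content = si.RetrieveContent()
host_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = host_view.view[0]
ds_system = host.configManager.datastoreSystem

# Disks that are eligible for a new VMFS datastore.
disks = ds_system.QueryAvailableDisksForVmfs()
disk = disks[0]
print("Creating datastore on:", disk.devicePath)

# Ask the host for a ready-made create spec for this disk, then apply it.
options = ds_system.QueryVmfsDatastoreCreateOptions(disk.devicePath)
spec = options[0].spec
spec.vmfs.volumeName = "datastore-seagate-1tb"   # placeholder name
spec.vmfs.majorVersion = 5                       # VMFS-5, matching the wizard choice
new_ds = ds_system.CreateVmfsDatastore(spec)
print("Created datastore:", new_ds.name)
```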

With a storage space created, we can now create virtual machines! In the “Virtual Machines” section, we can create a new VM. In the new VM wizard, we can specify whether it is a typical or custom VM. The “Typical” option is used for common operating systems such as Windows XP, whereas the “Custom” option is used for situations where additional options need to be specified, or in cases where an unsupported OS will be installed. From here, we can specify options such as the name of the VM, the VM OS type (i.e. Windows, Linux, Other), the storage space it will be installed to, the network type & adapter that will be utilized, and the amount of storage space assigned to the VM.
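
The wizard’s choices boil down to a VM config spec on the API side. Here is a rough sketch of an equivalent spec using pyVmomi; the VM name, guest ID, datastore, network, and sizes are placeholders for whatever you pick in the wizard, and the `si` connection is assumed from the first snippet.

```python
# Rough sketch: the new-VM wizard expressed as a vim.vm.ConfigSpec.
from pyVmomi import vim

content = si.RetrieveContent()
datacenter = content.rootFolder.childEntity[0]            # standalone host datacenter
vm_folder = datacenter.vmFolder
resource_pool = datacenter.hostFolder.childEntity[0].resourcePool

# One SCSI controller plus a 40 GB thin-provisioned disk on it.
# Negative keys mark devices that do not exist yet.
scsi = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    device=vim.vm.device.VirtualLsiLogicSASController(
        key=-101, busNumber=0,
        sharedBus=vim.vm.device.VirtualSCSIController.Sharing.noSharing))

disk = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=vim.vm.device.VirtualDisk(
        key=-102, controllerKey=-101, unitNumber=0,
        capacityInKB=40 * 1024 * 1024,                    # 40 GB system disk
        backing=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
            diskMode="persistent", thinProvisioned=True)))

# One NIC on the default "VM Network" port group (placeholder name).
nic = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    device=vim.vm.device.VirtualE1000(
        key=-103,
        backing=vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
            deviceName="VM Network")))

spec = vim.vm.ConfigSpec(
    name="win2008-lab",                                   # placeholder VM name
    guestId="winLonghorn64Guest",                         # Windows Server 2008 x64
    numCPUs=2,
    memoryMB=4096,
    files=vim.vm.FileInfo(vmPathName="[datastore-seagate-1tb]"),
    deviceChange=[scsi, disk, nic])

task = vm_folder.CreateVM_Task(config=spec, pool=resource_pool)
```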

Once all of these options have been confirmed, ESXi 5.0 will take a moment to create the VM on the storage space that you specified.
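
If you go the scripted route instead, creation is asynchronous and comes back as a task object you can wait on; pyVim ships a small helper for this.

```python
# Rough sketch: block until the CreateVM_Task from the previous snippet finishes.
from pyVim.task import WaitForTask

WaitForTask(task)                         # raises if the task ends in an error state
print("VM created:", task.info.result.name)
```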

Once the VM has been created, it will appear in the “Virtual Machines” list. You can make changes to the VM’s virtualized hardware, such as additional storage space, optical drives, network options, etc. I decided to install Windows Server 2008 as my first VM, as I want to experiment with some of the server roles and products like Active Directory (domain controller), SharePoint, SQL Server, etc. Through the optical device options of the VM, I selected the Windows Server 2008 ISO image to boot from to initialize the installation. Within the VM, I went through the typical Windows Server 2008 installation wizard, which progressed without any issues. After twenty minutes of installation, the Windows Server 2008 VM was ready to be utilized!
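
The optical-drive change and power-on can likewise be scripted. A rough sketch, with the ISO path, datastore, and VM name as placeholders and the `si` connection assumed from earlier:

```python
# Rough sketch: attach an installer ISO to a new CD-ROM device and power
# the VM on so it boots into Windows setup.
from pyVmomi import vim
from pyVim.task import WaitForTask

content = si.RetrieveContent()
vm_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in vm_view.view if v.name == "win2008-lab")   # placeholder name
vm_view.DestroyView()

# New VMs get IDE controllers by default; hang the CD-ROM off the first one.
ide = next(d for d in vm.config.hardware.device
           if isinstance(d, vim.vm.device.VirtualIDEController))

cdrom = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    device=vim.vm.device.VirtualCdrom(
        key=-1,
        controllerKey=ide.key,
        backing=vim.vm.device.VirtualCdrom.IsoBackingInfo(
            fileName="[datastore-seagate-1tb] iso/win2008.iso"),  # placeholder path
        connectable=vim.vm.device.VirtualDevice.ConnectInfo(
            startConnected=True, connected=True, allowGuestControl=True)))

WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[cdrom])))
vm.PowerOnVM_Task()
```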

In the next installment, I will go into more detail on managing multiple VMs in ESXi 5.0.

Comments: 3

  1. @cortexbomb:
    I checked the BIOS version on the GA-970A-UD3 motherboard and it is F4. If you are running a higher BIOS version, you might want to see if it’s possible to down-flash to the F4 version, as Gigabyte might have inadvertently broken IOMMU support in later BIOS versions.

  2. Hi cortex, welcome to Y-Corner! For the GA-970A-UD3, I only enabled the virtualization and IOMMU options. I did not update the BIOS and am using the stock BIOS. As I’m away on a trip, I can’t check the BIOS version that I’m using right now, but I’ll check the BIOS version when I get back.

    It is possible that IOMMU support may have been broken in the newer BIOS updates.

  3. Just threw together a whitebox based on this motherboard as well, but ESXi is saying VM DirectPath I/O is not supported on my hardware. Did you have to change any BIOS settings to get it working? I enabled the IOMMU and Virtualization options but still no luck.

    I’ve seen multiple reports of people using this board with ESXi 5 and successfully using DirectPath I/O, but for the life of me I can’t figure out what I’m doing wrong. The only major difference I’ve noticed between our configurations is that I used an Athlon II chip rather than a Phenom II, though both supposedly support AMD-V.
