
Exporting VMware Workstation VMDK to ESXi

In the quest to find the most difficult and convoluted way of doing things, I feel I really outdid myself this week. What should have been an easy task of creating an OVF in VMware Workstation 8 and deploying it into a vSphere environment turned out to be much more.

The night before, I had created an OVF of a Workstation VM. The next morning I went to copy the files to a USB stick before work. Still half asleep, I copied only the compressed VMDK file and not the actual OVF or MF files. Imagine my surprise when I got into work with only a single compressed VMDK file.

No worries, I’ll just upload it to a datastore and attach it. Yeah, no. You need the VMDK descriptor file that points to the -flat.vmdk file; without it, vCenter won’t find any disks to attach to a VM.

So after some cursing, hopeful optimism, then some more cursing I found a workable solution.

The Virtual Disk Development Kit 5.1 is a collection of tools to help you work with virtual disks. Inside this kit is a utility called vmware-vdiskmanager.exe. It actually ships with VMware Workstation, but not having it with me meant downloading the development kit to get hold of it. The utility is similar to the ESXi shell command vmkfstools but can be run on a Windows box, and it can expand or convert a VMDK file out to an ESX-type virtual disk.

The command I ran was:

vmware-vdiskmanager.exe -r <original_source.vmdk> -t 4 <new_destination.vmdk>
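
For illustration, with made-up paths and file names (not the ones I actually used), a run might look like this:

vmware-vdiskmanager.exe -r "C:\VMs\win2k8\win2k8.vmdk" -t 4 "C:\Export\win2k8-esx.vmdk"

-r specifies the source disk to convert and -t 4 selects a preallocated ESX-type virtual disk as the output.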

Once complete, two files are created: a .vmdk descriptor file and a -flat.vmdk thick (preallocated) virtual disk.
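
To show why the descriptor matters, here’s a trimmed-down sketch of roughly what one looks like. Every value below is invented for illustration, but the key bit is the extent line pointing at the -flat.vmdk file:

# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

# Extent description
RW 41943040 VMFS "new_destination-flat.vmdk"

# The Disk Data Base
ddb.adapterType = "lsilogic"
ddb.virtualHWVersion = "8"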

I was now able to upload the two files to a datastore. You have to upload both of them to the same location. You won’t see the -flat.vmdk file in the datastore browser once uploaded, but it will be there.

[Image: exporting_vmdk01]
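
If you want to convince yourself the -flat.vmdk really made it, one option is to enable SSH on the host and list the folder from the ESXi shell (the datastore and folder names below are placeholders):

ls -lh /vmfs/volumes/datastore1/myvm/

Both the descriptor and the -flat.vmdk file should be listed, even though the datastore browser hides the flat file.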

Then I went into the Edit Settings of my pre-created vSphere VM, clicked New Device, selected Existing Hard Disk, and clicked Add.

[Image: exporting_vmdk02]

vmware-vdiskmanager.exe is quite a useful little utility for moving Workstation VMDK files over to ESXi. Prior to Workstation 8 this is probably your best way to get a VM’s virtual disk into ESXi. Workstation 8 and above can export straight to OVF, which is no doubt the recommended way, but if you just want an individual disk, vmware-vdiskmanager is a decent option.

References:

VDDK Documentation

Download VMware vSphere 5.1 Virtual Disk Development Kit (login required)

Running nested ESXi – the real inception

Nested ESXi servers are essentially virtualised ESXi servers, and they can be quite fun to play with. Usually the only reason you’d do this is for a test lab. You don’t need a physical ESXi server to nest virtual ESXi hypervisors; you can use whichever base hypervisor product you prefer.

In my case I’m running VMware Workstation 9 with virtualised ESXi 5.1 as the second level, and then running virtual guests as a third level within the virtual ESXi. In Workstation 9 you have the ability to connect to ESXi hosts and view guest VMs, as below.

You don’t need to do anything special to virtualise ESXi. In fact there’s an option to select ESX as the guest OS in VMware Workstation 9.
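
As far as I can tell, that choice just ends up as a guest OS line in the VM’s .vmx file, something along these lines (the exact identifier may differ between Workstation versions, so treat this as an illustration):

guestOS = "vmkernel5"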

When you turn on a virtual ESXi guest (or is it a host?) a message appears in the bottom-right corner of the console. It’s pretty self-explanatory: you can only create 32-bit guest VMs on this ESXi server.

Being the cool cats they are, VMware allow you to run 64-bit guest VMs too with a small config change. The feature is only available in ESXi 5 and above.

On the virtualised ESXi guest, enable console access (or SSH) and log in as root. The file ‘config’ in /etc/vmware needs to be modified and an additional line added: ‘vhv.allow = “TRUE”’.

If you’re a vi fan, go ahead and use that. If you’re lazy like me, you can just append it to the file with echo.


echo 'vhv.allow = "TRUE"' >> /etc/vmware/config
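
A quick sanity check that the line actually landed in the file:

grep vhv /etc/vmware/config

It should echo back the vhv.allow = "TRUE" line you just appended.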

Once the change is made, no reboot is necessary. You will now be able to run 64-bit guest VMs within your already virtualised ESXi host. As an extreme example, I ran another ESXi guest at the third level to prove a 64-bit OS would run. On a second virtualised ESXi guest I’m just running a 32-bit Linux VM at that same third level.

For my test lab, three levels is as deep as I need to go. I’ve read that it’s also as deep as you can go, as guest VMs will fail to run at a fourth level. I don’t have the resources to test this out in my lab. I’ve heard there was a VMworld presentation on nesting ESXi servers multiple levels deep; it would certainly be worth finding.

 

Appendix

As with many things VMware, nested ESXi is not officially supported.
VMware Link:  Support for running ESX/ESXi as a nested virtualization solution

I had trouble with the vmxnet3 network drivers with the nested ESXi. Instead I chose to go with E1000, particularly for that third level.
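
At the Workstation level you can force the nested ESXi VM’s adapter type with a line like the one below in its .vmx file (ethernet0 assumed to be the first NIC); for the third-level guests it’s just a matter of picking E1000 in the VM’s settings in vSphere.

ethernet0.virtualDev = "e1000"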