Using pyVmomi to create a cluster and a datacenter in vCenter

Have you ever wanted to automate creating a Datacenter or a Cluster in your vCenter using pyVmomi but didn't know where to start? I created a sample in the community samples project that does just that. It is pretty simple to use, and I will demonstrate it now.

First, let's run the script with the -h argument to see what options it takes:

python make_dc_and_cluster.py -h
usage: make_dc_and_cluster.py [-h] -s HOST [-o PORT] -u USER [-p PASSWORD] -n
                              DCNAME -c CNAME

Standard Arguments for talking to vCenter

optional arguments:
  -h, --help            show this help message and exit
  -s HOST, --host HOST  vSphere service to connect to
  -o PORT, --port PORT  Port to connect on
  -u USER, --user USER  User name to use when connecting to host
  -p PASSWORD, --password PASSWORD
                        Password to use when connecting to host
  -n DCNAME, --dcname DCNAME
                        Name of the Datacenter to create.
  -c CNAME, --cname CNAME
                        Name to give the cluster to be created.

As we can see, you will need to supply a HOST name (this can also be an IP), a PORT number (the default is 443), a USER to connect as, and a PASSWORD for that user. For this to work properly your user will need the following permissions in vCenter: Host.Inventory.CreateCluster and Datacenter.Create. We also need to provide a DCNAME, which is the name of the Datacenter we want to create, and finally a name for the new (empty) Cluster. This sample does the absolute minimum to create these objects in your inventory. If you want to customize any settings you will need to modify the code to provide your own ClusterConfigSpecEx; if one is not provided (which is how the sample works), it uses an empty spec, which creates a Cluster with all default settings. Below is a rough sketch of what the sample does.
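
It boils down to two API calls: CreateDatacenter on the root folder and CreateClusterEx on the new datacenter's hostFolder. The connection details below are placeholders, and SSL handling will vary with your Python version and environment:

    # Sketch: the two calls that create a Datacenter and an empty Cluster
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # placeholder connection details -- substitute your own
    si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                      pwd='password', sslContext=ssl._create_unverified_context())

    # Datacenters are created under the root folder
    dc = si.content.rootFolder.CreateDatacenter(name='111222333')

    # an empty ClusterConfigSpecEx means all default cluster settings
    spec = vim.cluster.ConfigSpecEx()
    cluster = dc.hostFolder.CreateClusterEx(name='111222333 - Prod Cluster', spec=spec)

    Disconnect(si)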

Enough talk; let's run this and set up a new Datacenter and Cluster. Here is a screenshot of my inventory before we start:

[Screenshot: vCenter inventory before]

python make_dc_and_cluster.py -s 172.16.214.129 -u 'administrator@vsphere.local' -p password -n 111222333 -c '111222333 - Prod Cluster'

And here we have our new Datacenter and Cluster:

[Screenshot: vCenter inventory after, with the new Datacenter and Cluster]

Using pyVmomi to get a list of all virtual machines — fast

Something that often comes up from people new to the vSphere API is how to get information about all the VirtualMachines in the inventory very quickly. There is a sample that ships with pyVmomi called getallvms.py, which is an obvious place to start. When it's run against an inventory with only 30 VirtualMachines it seems pretty fast, taking about 0.5 seconds to complete. Try it on a larger inventory, say 500+ VirtualMachines, and it really starts to slow down, going from 0.5 seconds all the way up to over 6 seconds. That number keeps growing with the inventory; once you pass 1000 VirtualMachines it can take over 10 seconds for this info to come back. In other words, this solution just doesn't scale, but it's often the only way newcomers know. The good news is that VMware provides other ways to get this info. The bad news is that the solution is not obvious, and it's kind of complicated to use, but that's what I am here for 🙂
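
To see why it is slow, here is a simplified sketch of the naive pattern (the connection details are placeholders, and getallvms.py itself walks the inventory a bit differently):

    # Sketch: listing VM names by touching every managed object (slow at scale)
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # placeholder connection details -- substitute your own
    si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                      pwd='password', sslContext=ssl._create_unverified_context())
    content = si.content

    # a view over every VirtualMachine in the inventory
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        # each attribute access on a managed object stub is a separate
        # round trip to vCenter, so this loop costs one server call per VM
        print(vm.name)

    view.DestroyView()
    Disconnect(si)

The killer is in the loop: the objects are lazy stubs, so every property read goes back over the wire.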

Where I work we have over 45,000 vSphere-powered Virtual Machines, and it's my job as a Sr. Developer to make sure our code is streamlined, efficient, and scales the way we do. This is why I use property collectors when I need to work with objects from the vSphere inventory. To help new users I provided a sample I call vminfo_quick, which, as its name implies, gets info about VirtualMachines quickly. To test this, let's run the getallvms.py from above on a vCenter with 576 VirtualMachines and time it.


time python getallvms.py -s 10.12.254.119 -u 'administrator@vsphere.local' -p password

real 0m6.300s
user 0m2.476s
sys 0m0.123s

Just over six seconds. That's not too bad, right? Now let's run the vminfo_quick sample I provided against that same vCenter and see how it does. I included a counter and a timer in this sample so we don't have to run time.


python vminfo_quick.py -s 10.12.254.119 -u 'administrator@vsphere.local' -p password

Found 576 VirtualMachines.
Completion time: 0.368282 seconds.

As you can see, using a property collector vastly improves performance. I have tested this on an inventory with 1500 VirtualMachines and it still finishes in just under 1 second. I plan to cover the details of what the property collector is and how it works in future posts. Stay tuned!
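
In the meantime, here is a minimal sketch of the property-collector pattern (again, the connection details are placeholders, and the sample's actual code is more general):

    # Sketch: fetching every VM's name in one PropertyCollector round trip
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim, vmodl

    # placeholder connection details -- substitute your own
    si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                      pwd='password', sslContext=ssl._create_unverified_context())
    content = si.content

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)

    # tell the collector how to walk from the view to the VMs it contains
    traversal = vmodl.query.PropertyCollector.TraversalSpec(
        name='traverseView', path='view', skip=False, type=vim.view.ContainerView)
    obj_spec = vmodl.query.PropertyCollector.ObjectSpec(
        obj=view, skip=True, selectSet=[traversal])

    # only fetch the 'name' property of each VirtualMachine
    prop_spec = vmodl.query.PropertyCollector.PropertySpec(
        type=vim.VirtualMachine, pathSet=['name'], all=False)
    filter_spec = vmodl.query.PropertyCollector.FilterSpec(
        objectSet=[obj_spec], propSet=[prop_spec])

    # a single call returns the requested properties for every VM
    result = content.propertyCollector.RetrieveContents([filter_spec])
    names = [prop.val for obj in result for prop in obj.propSet]
    print('Found %d VirtualMachines.' % len(names))

    view.DestroyView()
    Disconnect(si)

The difference is that we ask the server for exactly one property per VM and get everything back in a single response, instead of making a call per object.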

vCloud Usage Meter

Recently I was tasked with building a tool to create customers and rules in vCloud Usage Meter to help enhance our monthly usage reports. The tool needed to connect to all of our vCenter Servers, pull some data from them, then connect to vCloud Usage Meter, create a customer, and make a rule tying that customer's inventory to some managed object. Normally I do most of my coding in Groovy using the Grails framework, but for this project I decided to use Python. The main reason was that we needed to run this tool once a month, hands off. I knew our Ops Team would want to create a crontab for it on one of our Linux servers, so I wanted to make something very portable. Python was the obvious choice given that requirement.

When I got started I went looking for a library that would make it easier to interact with vCloud Usage Meter, but I didn't find anything. vCloud Usage Meter provides a REST-based API, so I decided to write my own library, which I called thunderhead. I thought it might be useful to others, so I went ahead and open-sourced the work under Apache-2.0. At the time of this writing it is not 100% feature complete, but it does provide a lot of functionality, and I plan to continue development until it is. I also always appreciate pull requests, so if you need something I haven't implemented please feel free to send one. For examples of how to use the library, see the tests that are included. To install thunderhead, simply:

pip install thunderhead

VirtualBox + Vagrant + VCSA + VCSIM To Automate Your vCenter Test Environment

Just The Facts...

Before we even get started I have to say this: what follows in this article is 100% completely unsupported in any way, shape, or fashion. You should only even consider doing this if you are in a situation like mine, where you need to do some form of automated testing. Some of these things may even violate the EULA of one or more of the technologies used. Last but not least: in no way is this something my employer (current or former) endorses doing or had any role in. This mess is all mine.

What Problem Are You Trying To Solve Here?

I am a software developer. I work full time developing applications that interface directly with various VMware (and at times OpenStack) technologies. While working on the VMware stuff I almost always need a vCenter with a complete inventory to test my code against. Thankfully, people like William Lam have written wonderful blog posts that got me most of what I needed without my having to do much more than modify a script. This blog post takes what I have learned from doing all of this manually and wraps it up into Vagrant using VirtualBox and a couple of command-line tools from VMware.

Press The Any Key?

If you have ever tried running the VCSA on VirtualBox, you will have seen this message for sure:

    The profile does not allow you to run the products on this system.
    Proceeding to run this installation will leave you in an unsupported state and might impact your compliance requirements.

You will also know how annoying it is to have to press any key to continue during subsequent boots post-install. These issues make it very difficult to create automated test environments that you can spin up and tear down as needed. Thankfully the VCSA is a Linux box, so if you are crafty you can bypass these checks before they ever happen.

What Are All Those Tools in Your Toolbox?

First, to keep the messages above from ever showing in the first place, we have to modify system files on the VMDK of the system disk. The first tool I'll be using is one supplied by VMware. It's called vmware-mount, and it's part of the vSphere Virtual Disk Development Kit (vddk from here out). For some reason it was removed from the 5.5 kit, but in my testing thus far the 5.1 kit has worked fine on my 5.5 VCSA setup. I won't be discussing how to set up the vddk or the prereqs it needs; please use the documentation provided by VMware for that. Next I'll be using VirtualBox, Vagrant, and then some scripts from William Lam to customize the VCSA and configure VCSIM. In my environment I will be using Debian Linux Wheezy 7.5, VirtualBox 4.3.12, Vagrant 1.6.3, and VMware-vCenter-Server-Appliance-5.5.0.10000-1624811. You should download these things and install them before you get started. The VCSA only needs to be downloaded, but VirtualBox, Vagrant, and the vddk will need to be installed and configured. This process may work on other versions, but YMMV.

Everything Has A Place, And There Is A Place For Everything

Before we start making any changes: I found this whole process works best if you import the VCSA OVF into VirtualBox first. To do that, start VirtualBox, click on the File menu, and select Import appliance. Now navigate to where you downloaded the VCSA files from VMware, select the OVF file, and follow the instructions to complete the process. You do not need to power on (nor should you power on) the newly created VM. If you skip this step you can end up struggling with your mounted VMDK going read-only as soon as you try to make any changes to files on its file system further along in this process. With that out of the way, let's move on to the changes that have to be made.

Change, The Only Constant

With all the prereqs out of the way, let's get this party started. We need to start by using vmware-mount to mount the VCSA system VMDK so we can make some changes to bypass those pesky messages I discussed at the beginning of this article. I store my VirtualBox files in ~/Virtualbox Vms/vmware/vcsa:

    cd ~/"Virtualbox Vms"/vmware/vcsa
    mkdir /tmp/vcsa
    sudo vmware-mount VMware-vCenter-Server-Appliance-5.5.0.10000-1624811-system.vmdk 3 /tmp/vcsa

Now with the disk mounted we can make our changes. There are several files we are interested in. For Vagrant to work happily we need to make some changes to the sshd_config file, and to stop the pesky messages we get on boot we need to modify a file called boot.compliance. Let's make these changes now, starting with sshd_config, where we want to enable password auth and turn off DNS lookups.

    vim /tmp/vcsa/etc/ssh/sshd_config

In this file, find the following settings and edit them so they look like this:

    PasswordAuthentication yes
    UseDNS no
    #AllowGroups shellaccess wheel

Just a note on PasswordAuthentication: it is optional and honestly not used in this process. We will be using keys; I only enable it for some other things not covered in this post. I found this value listed twice: the first occurrence was already set to yes, then near the bottom of the file it was set again to no. I simply removed the second entry, leaving only the first. Leaving it set to no does not actually keep SSH password auth from working; the sshd_config documentation can be confusing here, because you can set this to no and still get prompted for a user name and password. That prompt normally comes from the keyboard-interactive login method (which still uses a password, which is where people get confused). There is a really boring RFC you can read about it here if you need some help getting to sleep some time. UseDNS no tells the sshd server not to do a reverse lookup of hosts trying to connect; since this will be running from my own local host on VirtualBox, there is no need for the DNS lookup to happen. The last line removes the requirement to be in the shellaccess or wheel group. Next, let's shut up the warnings about being unsupported and disable having to press the any key. For that we have to edit the boot.compliance file.

    vim /tmp/vcsa/etc/init.d/boot.compliance

In this file, look for the following lines and make them look like this:

    MSG=`/usr/bin/isCompliant -q`
    CODE=0

Here I add the -q flag, which tells the isCompliant script (which is GPL code and looks to be part of the base SUSE install) to run in quiet mode and not output the messages it generates when something isn't compliant. Next I hard-code CODE=0 because I know isCompliant will return a nonzero status, causing yet another message to pop up on screen: the one asking us to “press any key to continue”. Setting this value to 0 suppresses that message and lets the boot process continue without human intervention.

Next I need to add the vagrant user. Normally I might use something like adduser/useradd, but here I will make the changes those tools make by hand, editing the passwd and shadow files directly.

    vim /tmp/vcsa/etc/passwd

Here I will add the following line to the bottom of the file:

    vagrant:x:1519:100::/home/vagrant:/bin/bash

For an explanation of what each of the colon-separated fields means, see the passwd(5) Linux man page. Next I edit the shadow file.

    vim /tmp/vcsa/etc/shadow

I add the following line to the file and save it. FYI: the shadow file is read-only even for root, so I have to force the save.

    vagrant:$6$gxhfnuHm$FMUpGdh2kca12joVHFFV33bhJSvxE5xQWAn3RzkQmP1X/KeckBW2ODcYhe2a2y5D3ROUTtMDgu0Djzlpz4E7B/:16271:1:60:7:::
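
As an aside: if you want a password other than the one used here, you can generate a compatible SHA-512 shadow hash yourself. Here is a minimal sketch using Python's crypt module (Linux only, since it relies on glibc's crypt(3); the salt shown is arbitrary):

    # generate a SHA-512 crypt hash suitable for /etc/shadow
    import crypt

    # '$6$' selects SHA-512; 'SomeSalt' is an arbitrary salt value
    print(crypt.crypt('vagrant', '$6$SomeSalt'))

Paste the output into the second field of the shadow entry.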

Here I am setting the password for the vagrant user to vagrant. For a full explanation of what each colon-separated field means, see the shadow(5) Linux man page. Now I need to add a home directory and copy in the basic files that come with one. Those files are typically located in /etc/skel, and the VCSA is no exception.

    mkdir -p /tmp/vcsa/home/vagrant/.ssh
    cp /tmp/vcsa/etc/skel/.{b*,e*,i*,pro*,vim*} /tmp/vcsa/home/vagrant
    chown -R 1519:100 /tmp/vcsa/home/vagrant
    chmod 700 /tmp/vcsa/home/vagrant/.ssh

Now I need to add the vagrant insecure ssh key to the authorized_keys file:

    wget https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub -O /tmp/vcsa/home/vagrant/.ssh/authorized_keys
    chown 1519:100 /tmp/vcsa/home/vagrant/.ssh/authorized_keys
    chmod 600 /tmp/vcsa/home/vagrant/.ssh/authorized_keys

Finally, I need to add the vagrant user to the sudoers file and give them full access without a password. Run visudo and add the line shown:

    visudo -f /tmp/vcsa/etc/sudoers
    vagrant ALL=(ALL) NOPASSWD: ALL

The Light At The End Of The Tunnel

We are getting close to being finished; only a few steps remain. The next few took me some time, not because they are hard, but because finding the packages needed for them was not easy. Almost everything here is in service of using Vagrant and VirtualBox, and after having done it all I'm still not 100% sure it was worth the effort, but I digress. Let's move on and finish this.

The last few steps are:

  1. Unmount the system vmdk we mounted earlier.
  2. Start the VCSA (but do not do anything other than power it on).
  3. Install the packages needed to build kernel modules.
  4. Install VBoxLinuxAdditions into the VCSA.
  5. Power off the VCSA and export it into Vagrant.

First I unmount the vmdk using vmware-mount -x, then I power on the VCSA. Once it's booted I log in as root using its default password, “vmware”. Next, using zypper, I install make and gcc:

    zypper --gpg-auto-import-keys ar http://download.opensuse.org/distribution/11.4/repo/oss/ 11.4
    zypper --gpg-auto-import-keys ar http://download.opensuse.org/update/11.4/ 11.4-updates
    zypper install make gcc

When I run this command the system tells me there is a problem and offers options to get past it; I pick option 1, which downgrades binutils. Once that install finished it warned me about needing to restart, but I ignored that because I'm only going to install a couple more things and then power it off anyway. Here comes the hard part (at least it was for me): locate and install kernel-source-3.0.101-0.5.1 and kernel-default-devel-3.0.101-0.5.1. Locating those packages was the most difficult thing in this whole process. You can't install them with zypper, at least not using the repos from above; I honestly found these packages on an FTP server in Russia. Seriously, it was not fun tracking them down. Once located, download them and install them using:

    rpm -Uvh kernel-*

Now you are finally ready to build the VirtualBox Guest Additions. This is what allows you to use port forwarding in Vagrant, as well as shared folders and many other Vagrant features. To install the guest additions you need to copy VBoxLinuxAdditions.run up to the VCSA. The file lives on the VBoxGuestAdditions.iso, which needs to be mounted somewhere so you can copy the file off of it. I did this by doing:

    mkdir /tmp/guest_add
    mount -o loop /usr/share/virtualbox/VBoxGuestAdditions.iso /tmp/guest_add/

Next I used scp to copy the file up to the VCSA. Then simply make the file executable and run it:

    chmod +x VBoxLinuxAdditions.run
    ./VBoxLinuxAdditions.run --nox11

Once this process finishes you are done-ish.

Wrapping it all up

All that is left now is to power off the VCSA and export the image into Vagrant. I do that by running:

    vagrant package --base VMware_vCenter_Server_Appliance

Where VMware_vCenter_Server_Appliance is the name I gave the image in VirtualBox when I imported it. On my laptop, a Dell with an i7, 32G of RAM, and an SSD, this process took 27 minutes. Once it completes you need to import the result into Vagrant. I did that by doing this:

    vagrant box add vcsa-5.5 package.box

This process never took more than 5 minutes. Next, if you use my Vagrant files, you can almost vagrant up. I say almost because I hit a snag where I couldn't get the vCenter configuration script to run properly without this goofy hack. Once you have my files, you can vagrant up like this and it will work:

     vagrant up && ssh -p 2222 -o "StrictHostKeyChecking=no" -i vagrant root@127.0.0.1 < provision/configure_vcsa.sh

That's it. Now just wait 20-50 minutes for everything to finish, and you can log in to your VCSA:

  1. vCenter: https://localhost:8443/ (user administrator@vsphere.local, password password). This is also where to hit for API access into vCenter.
  2. Web client: https://localhost:9443/ (user administrator@vsphere.local, password password).
  3. Config page: https://localhost:5480/ (user root, password vmware).

The End

Thanks for going through this; I know it was long. It took me a while to get it all working, and I tried to keep this post to only the important bits of the process. Again I must say: you are 100% unsupported doing this stuff. VCSIM itself is not supported, and making all these crazy changes to the VCSA goes even further beyond the realm of support. If you have questions or comments about this process, please feel free to reach out to me. I will be glad to help where I can, but don't read that as me offering to support you if you do this. It's a very advanced topic and one that should only be attempted if you are comfortable doing this kind of thing.