(DC)²: Going forward to a 1.0 Release

There has been a bit of blog silence from me during the last two months, but I was really busy with some projects at work.

Today I would like to write some bits and pieces about the "DataCenter Deployment Control" project, aka (DC)².
In my last article you could see (DC)² in action, deploying some virtual machines on a XenServer (or on VMware, or on bare metal, or on any device which is able to do PXE).

At that time, (DC)² was using the Django framework as backend and the fantastic RPC4Django as RPC module. MySQL was in use as the database engine. We used the very well-working tftpy as TFTP server and PXELINUX from Syslinux as PXE bootloader. As frontend development framework I used the Qooxdoo JavaScript framework.

Now I have improved all of this.

The Backend

First of all, I replaced Django and RPC4Django with web.py and a self-developed XMLRPC and JSON-RPC module. With less overhead, all RPC calls are much faster now.
Furthermore, I revisited the whole RPC namespace and refactored most of it.
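
To give you an idea how little code such an RPC endpoint needs, here is a minimal sketch of serving XMLRPC through web.py. This is only an illustration under assumptions (Python 3 and a current web.py), not the actual (DC)² RPC module; the /xmlrpc path and the registered "ping" function are placeholders.

import web
from xmlrpc.server import SimpleXMLRPCDispatcher

# Register the functions that make up the RPC namespace.
dispatcher = SimpleXMLRPCDispatcher(allow_none=True)
dispatcher.register_function(lambda: "pong", "ping")

urls = ("/xmlrpc", "XMLRPCHandler")

class XMLRPCHandler:
    def POST(self):
        # The stock dispatcher parses the XMLRPC request body and
        # marshals the result back into an XML response.
        web.header("Content-Type", "text/xml")
        return dispatcher._marshaled_dispatch(web.data()).decode("utf-8")

if __name__ == "__main__":
    web.application(urls, globals()).run()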

Another important change was to move away from the relational database (MySQL), as it was introducing more complexity to the project.
When I started to think about moving from the relational model to a document-oriented model, I first gave CouchDB a try. But CouchDB wasn't the best candidate for this, so I had a look at MongoDB.

And MongoDB it is.

So, with MongoDB and PyMongo you can work without special table models, but if you want, you are able to implement a relational-style schema, which some of my workflows needed.
Furthermore, the replication and sharding functionality of MongoDB was exactly what I was looking for: easy to set up and configure.

And MongoDB gives you JSON output, or native dictionary types when you work with PyMongo, which was important to me, because one feature I wanted for (DC)² is that its documents can be easily extended.

Example:

We do auto-inventory for servers. That means I needed some information from the servers which is unique.
I defined my server document like this:


SERVER_RECORD = {
    "uuid":True,
    "serial_no":True,
    "product_name":False,
    "manufacturer":False,
    "location":False,
    "asset_tags":False 
}


Reading this, we just need a server UUID (which you can normally find under /sys/class/dmi/id/product_uuid; if this displays 00000 or something else other than a UUID nowadays, you should stone your hardware manufacturer) and a serial number (/sys/class/dmi/id/product_serial).
This information is needed to identify a server. Any other info is not necessary (during the inventory job I try to collect it, but it is just not that important).
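
For illustration, here is a minimal sketch of how an inventory job could read those two identifiers straight from sysfs. The helper name read_dmi is made up; only the two /sys/class/dmi/id paths come from the paragraph above.

def read_dmi(entry):
    # Read one DMI attribute exposed by the kernel under /sys/class/dmi/id/
    with open("/sys/class/dmi/id/%s" % entry) as f:
        return f.read().strip()

record = {
    "uuid": read_dmi("product_uuid"),
    "serial_no": read_dmi("product_serial"),
}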

But this record is not complete. Some server admins need more information, like "how many CPU cores does the server have?" or "how much memory does the server have?" If you want to add this information, you just add it to the inventory job (how you do that is a topic for another article). You simply push the record with the required fields plus your added fields to the same RPC call, and (DC)² will just save it to MongoDB.

And this is possible all over the system. I defined the information which is needed for the standard work, which is really enough to help you deploy servers and help you with the bookkeeping, but you can add as much information as you need on top. Without changing one bit of backend code. A sketch of what that could look like follows below.
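
As a rough sketch of that idea: the backend only has to check the required fields and store whatever else arrives. The database name "dc2", the collection name "servers" and the helper save_server are assumptions for this example; only the SERVER_RECORD definition comes from the post.

from pymongo import MongoClient

SERVER_RECORD = {"uuid": True, "serial_no": True, "product_name": False,
                 "manufacturer": False, "location": False, "asset_tags": False}

def save_server(record):
    # Reject records that miss a required field ...
    missing = [key for key, required in SERVER_RECORD.items()
               if required and key not in record]
    if missing:
        raise ValueError("missing required fields: %s" % ", ".join(missing))
    # ... but store everything else, including fields the schema
    # has never heard of (e.g. "cpu_cores", "memory_mb").
    MongoClient()["dc2"].servers.insert_one(record)

save_server({
    "uuid": "11111111-2222-3333-4444-555555555555",  # placeholder values
    "serial_no": "ABC12345",
    "cpu_cores": 8,        # extra field, stored without any backend change
    "memory_mb": 16384,    # another extra field
})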

The Middleware

Well, (DC)² is mostly bookkeeping and configuration management, and it helps you to control your server fleet.
The deployment itself is done by FAI (Fully Automatic Installation), an easy tool to deploy Debian-based distros as well as RPM-based distros like RHEL, CentOS, SLES, Fedora, Scientific Linux etc.

So, how does it interact with (DC)²?

As said before, the backend speaks XMLRPC and JSON-RPC. The JSON-RPC part is for the frontend, the XMLRPC part is for the middleware and all tools needing data from (DC)².

The PXE booting is also improved. Instead of using TFTP for loading the kernel and initrd, I switched from the old pxelinux to the new gpxelinux (included in Syslinux 4.02).
gpxelinux needs TFTP only for loading the gpxelinux.0 file; all other files are downloaded via HTTP.
This gives you a wonderful possibility to cheaply scale your deployment infrastructure.


The Frontend

The frontend did not change as dramatically as the backend, but good things are still to be found.
First of all, I put most of the code into separate modules. So, right now, there are modules for the models, which are used for the JSON-RPC calls and for pushing the data back to the widgets.
There is a module for all globally used widgets. You'll see that there is one widget which is used most of the time. It's called "TableWidget" and contains most of the functionality.

You can put any widget you need into the tab view.

You can see that the web frontend looks just like a desktop application, which was indeed the purpose of using Qooxdoo and not an "HTML add-on framework" like Dojo or YUI. I needed a real developer's framework, and Qooxdoo is really one of the best. You can code as you would with Nokia's Qt, and it follows the OOP paradigm most of the time.

And even for me, someone who had no clue about JavaScript, it was easy to learn and adopt.

To show you how easy it is to add a new tab with a TableWidget, here is the JavaScript code of the Servers tab.


Code Example:

_showInventoryServers:function(e) {
  if ("inventory_server" in this._tabPages) {
    this._tabView.setSelection([this._tabPages["inventory_server"]]);
  } else {
    var server_tbl=new dc2.models.Servers(dc2.helpers.BrowserCheck.RPCUrl(false));
    var server_search_dlg=new dc2.dialogs.search.Servers();
    var server_edit_dialog=new dc2.dialogs.EditServer();
    var server_table_options={
      enableAddEntry:false,
      enableEditEntry:true,
      enableDeleteEntry:true,
      enableReloadEntry:true,
      editDialog:server_edit_dialog,
      searchFunctions: {
        searchDialog:server_search_dlg
      },
      tableModel:server_tbl,
      columnVisibilityButton:false,
      columnVisibility:[
        {
          column:0,
          visible:false
        }
      ]
    };
    var server_table=new dc2.widgets.TableWidget(server_table_options);
    this._addTabPage("inventory_server",'Servers',server_table);
    server_table.showData();
  }
},

You can take a closer look at the rest of the code on the (DC)² code browsing page on Launchpad; all the code is there.
The current frontend version of (DC)² is using Qooxdoo version 1.5.


New Features

CS²

As you can see on the screenshots, there is another menu entry with the name "CS²".

(CS)² means "Centralized SSL CA System" and helps you to manage your SSL host keys, CSRs, certs and CRLs. It is mostly used in the deployment system for Puppet or FreeIPA or whatever tools you are using which are in need of SSL authorization.
(CS)² can be integrated into (DC)² but is also usable as a standalone application. Like (DC)², it has an XMLRPC and JSON-RPC backend, has a Qooxdoo frontend and is completely written in Python. Check out the screenshots.


RPM Based Distributions

Thanks to the work of the great Michael Goetze, FAI is able to install RPM-based distros like CentOS or Scientific Linux. I converted the CentOS deployment to RHEL 5 and RHEL 6, so now you are able to deploy most of the widely used RPM-based distributions with FAI.
Thanks also to Thomas Lange, the new maintainer of Rinse, who added my patch to it.



What's still to come?


I'm working on a Xen management center for (DC)², so you can provision Xen VMs (HVMs/PVs) in one tool without using any other tool.

This is a bit tricky, but it's coming along.
This module will also be available as an integration into (DC)² and as a standalone application.
It will also have an XMLRPC and JSON-RPC backend.
Possibly (this is not set in stone) this RPC backend will also handle VMware ESX server provisioning. We'll see.

Quick tip for installing Ubuntu as Paravirtualized Guest on XenServer via PXE Boot

Most of the time, when you are using your Amazon cloud instances, you are working on top of Xen.
Most of the time, all your Ubuntu instances are paravirtualized (PV) and not fully hardware-virtualized like the Windows instances (HVM).

Well, let's imagine you have your own XenServer and you want to install Ubuntu with your already-in-place deployment solution, which uses the standard PXE/TFTP way... (Ubuntu is just an example; it actually works for most Linux distros which can be deployed via network).

The first question you need to ask is: what's the difference between PV and HVM machines?
To answer that, you just have to have a look at the Xen wiki:

Quote from http://wiki.xensource.com/xenwiki/XenOverview:

"
Xen supported virtualization types

Xen supports running two different types of guests. Xen guests are often called as domUs (unprivileged domains). Both guest types (PV, HVM) can be used at the same time on a single Xen system.

Xen Paravirtualization (PV)

Paravirtualization is an efficient and lightweight virtualization technique introduced by Xen, later adopted also by other virtualization solutions. Paravirtualization doesn't require virtualization extensions from the host CPU. However paravirtualized guests require special kernel that is ported to run natively on Xen, so the guests are aware of the hypervisor and can run efficiently without emulation or virtual emulated hardware. Xen PV guest kernels exist for Linux, NetBSD, FreeBSD, OpenSolaris and Novell Netware operating systems.
PV guests don't have any kind of virtual emulated hardware, but graphical console is still possible using guest pvfb (paravirtual framebuffer). PV guest graphical console can be viewed using VNC client, or Redhat's virt-viewer. There's a separate VNC server in dom0 for each guest's PVFB.
Upstream kernel.org Linux kernels since Linux 2.6.24 include Xen PV guest (domU) support based on the Linux pvops framework, so every upstream Linux kernel can be automatically used as Xen PV guest kernel without any additional patches or modifications.
See XenParavirtOps wiki page for more information about Linux pvops Xen support.

Xen Full virtualization (HVM)

Fully virtualized aka HVM (Hardware Virtual Machine) guests require CPU virtualization extensions from the host CPU (Intel VT, AMD-V). Xen uses modified version of Qemu to emulate full PC hardware, including BIOS, IDE disk controller, VGA graphic adapter, USB controller, network adapter etc for HVM guests. CPU virtualization extensions are used to boost performance of the emulation. Fully virtualized guests don't require special kernel, so for example Windows operating systems can be used as Xen HVM guest. Fully virtualized guests are usually slower than paravirtualized guests, because of the required emulation.
To boost performance fully virtualized HVM guests can use special paravirtual device drivers to bypass the emulation for disk and network IO. Xen Windows HVM guests can use the opensource GPLPV drivers. See XenLinuxPVonHVMdrivers wiki page for more information about Xen PV-on-HVM drivers for Linux HVM guests.
"

So, taking a naïve view, the difference is that an HVM machine "simulates" a real hardware server, while a PV machine uses the hardware resources of the XenServer host.
An HVM machine provides a BIOS, the PV machine does not. I don't want to go into other details, and this description is far from the whole truth, but it helps to see the difference.

Well, now we come to the problem: how can you do a PXE install on a PV machine, when the PV machine does not provide a boot BIOS or whatever else it needs to make the initial boot request?

There are some howtos on deploying a Linux OS on a PV machine on XenServer via PXE (e.g. the Xen PXE Boot Howto by Zhigang Wang), but they go too far. It can be easier.

Once you have a template for a PV machine on your XenServer and have provisioned one PV machine from this template, we can start with the experiment.

You can see from your Xen console that during bootup there is no BIOS message or PXE boot message, as you would see on a normal HVM machine.
But when you check in your XenCenter under the VM -> Start/Shutdown menu, you see one entry below the Reboot entry. It's labeled "Start in Recovery Mode".

When your machine is stopped and you click on this menu item, the machine boots with a BIOS, or better said, it boots with a PXE bootloader and behaves just like an HVM machine.
What? You provisioned a PV machine, and now you have an HVM?

Right, that's all there is to it. When you stop the machine now, it goes back to the normal PV state. How cool is that?

But, what is the magic behind this special "Recovery Mode"?

Honestly, it took me some time to find the solution.

To find out more about this, I dug into the XenServer XMLRPC API to get some more detailed information about the VMs.

The devs of XenServer are really cool: they provide an XMLRPC API server and they also provide a Python XMLRPC API wrapper.
(I won't go into details about all the methods and calls; you should read the XenServer XMLRPC API documentation and also the Python examples. You can also download the XenAPI.py module from there.)

Let's do some easy hacking:

First, get your Python XenAPI source and start connecting to your XenServer:

#!/usr/bin/python
from XenAPI import Session

if __name__ == "__main__":
    s = Session("https://your.xenserver.domain/")
    s.login_with_password("username", "password")

Now you are connected and authenticated.
To make things easier, write down your machine's title/label. Let's imagine our PV machine is named "PV-Test".

To get the information we need, we first have to get the VM record from the XenServer:

vm=s.xenapi.VM.get_by_name_label("PV-Test")
vm_rec=s.xenapi.VM.get_record(vm[0])

Now we actually have the whole description of this VM in our "vm_rec" variable.
The type is a dict, so it's easy to iterate through it and get all the information we need:

for i in vm_rec.keys():
    print "%s => %s" % (i, vm_rec[i])

The important keys we need are the following:

  • PV_args
  • PV_bootloader => pygrub
  • PV_ramdisk
  • PV_kernel
  • PV_bootloader_args
  • PV_legacy_args
  • HVM_boot_params
  • HVM_boot_policy

On my test machine the values are like this:

  • PV_args => 
  • PV_bootloader => pygrub
  • PV_ramdisk => 
  • PV_kernel => 
  • PV_bootloader_args => 
  • PV_legacy_args => 
  • HVM_boot_params => {}
  • HVM_boot_policy => 

PV_bootloader => pygrub tells us that Xen will use a dedicated menu.lst from your machine's /boot/grub (grub-legacy format, not grub-pc).
This is the default way of booting your Ubuntu instances on Amazon today.

To simulate the Recovery Mode programmatically, you need to switch from the PV pygrub boot method to the HVM boot method. And thanks to some Xen magic, or better, as I realized, HVM boot settings always take precedence over PV boot settings.

To enable HVM network boot from your Python tool, you just have to do this (note that the setters take the VM reference, not the record):

s.xenapi.VM.set_HVM_boot_policy(vm[0], "BIOS order")
s.xenapi.VM.set_HVM_boot_params(vm[0], {'order': 'n'})

When you now start your machine, you will see it boot via PXE.

To switch back to your normal PV boot method, you just empty those settings:

s.xenapi.VM.set_HVM_boot_policy(vm[0], "")
s.xenapi.VM.set_HVM_boot_params(vm[0], {})

Now you have successfully simulated the Recovery Mode boot from XenCenter.
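
Put together, a small sketch of the whole toggle could look like this. It assumes the XenAPI.py wrapper mentioned above; host, credentials and the VM label "PV-Test" are placeholders, and the helper set_pxe_boot is made up for this example.

#!/usr/bin/python
from XenAPI import Session

def set_pxe_boot(session, vm_ref, enable):
    if enable:
        # Boot like an HVM guest, network first ('n' = network in the BIOS order)
        session.xenapi.VM.set_HVM_boot_policy(vm_ref, "BIOS order")
        session.xenapi.VM.set_HVM_boot_params(vm_ref, {"order": "n"})
    else:
        # Empty values switch the guest back to its normal PV/pygrub boot method
        session.xenapi.VM.set_HVM_boot_policy(vm_ref, "")
        session.xenapi.VM.set_HVM_boot_params(vm_ref, {})

if __name__ == "__main__":
    s = Session("https://your.xenserver.domain/")
    s.login_with_password("username", "password")
    vm_ref = s.xenapi.VM.get_by_name_label("PV-Test")[0]
    set_pxe_boot(s, vm_ref, True)   # the next boot goes through PXE like an HVM guest
    # ... run your PXE installation, then restore the PV boot method:
    # set_pxe_boot(s, vm_ref, False)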

But hey, there are some things to know:

All releases of Ubuntu that use UUIDs in fstab for their disks are easy to deploy. During installation in HVM mode, you will see the normal disk names like /dev/sda etc.
After switching back to PV mode, you won't have /dev/sda etc. anymore but other device names. This is no problem for your Ubuntu install, because it can map the UUIDs to the new device names. No problems here. But make sure you have the "grub-legacy-ec2" package installed. I think I'll ask for a rename of this package, because it's not EC2-specific but Xen pygrub-specific.

Other Linux distros which don't use UUIDs for device mounting will have problems here. You need to rewrite your fstab to use the new device names.

But it's good to know that you can use your PXE deployment solution to deploy better-performing PV machines on your XenServer without changing a thing.

(DC)² in Action for XenServer Virtual Machines

Yesterday I found the time to show in a video how (DC)² actually works and what it is doing in the background (behind all that hyped JavaScript stuff).

This video shows how to create a XenServer virtual machine for PXE boot and what happens when you are done with it.

There are some annotations. And don't be angry that most of the video is recorded from a VNC output of my VirtualBoxed Windows workstation. It's recorded with recordMyDesktop.

Everything else is running on Ubuntu 11.04.



You can't see the embedded video?

Please go directly to blip.tv to see the video.
Or download the original source from blip.tv.
Or use the blip.tv SD source.



Back in town

Since Tuesday my family and I have been back in Germany, after a four-week holiday in Cameroon.

The holiday was fantastic: we met almost all members of my family over there, and I even had the adventure of twice having a shotgun pointed at my head.

Anyway, Cameroon is a great place to be, and the people are great, welcoming and heartwarming.

The food is delicious. The drinks are wonderful.

But the best thing was coming back home and coming back to the office.

My colleagues used my absence to work on a trailer for our team.




(Can't see the video? Just go to the blog directly or to http://www.youtube.com/watch?v=v5eZoojARb4)


Logos and names are mostly trademarked by the respective companies; (DC)² is GPL-licensed software written by me.
The music? I don't know... please don't shoot us :)

Even my father-in-law wears Ubuntu

As you can see, even my father-in-law wears Ubuntu.

Short holiday in Belgium

Just a week before we start our Cameroon trip, we are visiting relatives in Belgium.
A good start into a long holiday season :-)



OPS@Teammeeting + Remote Meeting with US colleagues (UTC-8)

While the others (sorry, colleagues in California) have to sit in a room....


The K-OPS team @ sadig's place; look at the iPad... the remote meeting had already started :)

Yummy

More yummy

Happy Happy People

Unity and 2x 24"


Unity on 2x 24" monitors...


looks nice...

Going to Cameroon, Visiting Family

Now we finally made it: our trip to Cameroon is set,
and we finally got the tickets; we are just waiting for the visas.

So, as I'm going to Cameroon, it would be a good idea to meet up with some folks of the Cameroon Ubuntu LoCo team.

My family and I will be somewhere around the city of Bamenda and the Njinibi area, and hopefully we'll be able to get some prepaid airtime cards for our Androids. I would like to hear from you in case you are also in this area.

Just give me a ping on IRC, or email me, or catch up with me on Jabber/Google Talk/XMPP; you can find the details on https://launchpad.net/~shermann . You can even catch up with me on Facebook (Facebook user ubuntuworker).

If you are in need of a GPG signature, please bring your passport or any other identification with you, plus your GPG key ID and your fingerprint (printed on a piece of paper). I'll do the same.

Maybe we will also find the time to have a drink or two ;) (Cameroonian Guinness or Castel is great :))

I'm happy to hear from you, get in touch.

Facebook - Evolution of Google?

Facebook announced opencompute.org.

And is it just me, or is Facebook copying Google, or is it more an evolution of Google's ideas?