SREs Needed (Berlin Area)

Gaikai

We are looking for skilled people for SRE / DevOps work.

So, without further ado, here is the job posting :)

SRE / DevOps

Do you want to be part of an engineering team that focuses on building solutions that maximize the use of emerging technologies to transform our business and achieve superior value and scalability? Do you want a career opportunity that combines your skills as an engineer with a passion for video gaming? Are you fascinated by the technologies behind the internet and cloud computing? If so, join us!

As a part of Sony Computer Entertainment, Gaikai is leading the cloud gaming revolution, putting console-quality video games on any device, from TVs to consoles to mobile devices and beyond.

Our SREs focus on three things: overall ownership of production, production code quality, and deployments.

The successful candidate will be self-directed and able to participate in the decision-making process at various levels.

We expect our SREs to have opinions on the state of our service and to provide critical feedback during the various phases of the operational lifecycle. We are engaged throughout the software development lifecycle, ensuring the operational readiness and stability of our service.

Requirements

A minimum of 5 years' working experience in a Software Development and/or Linux Systems Administration role. Strong interpersonal, written and verbal communication skills. Availability to participate in a scheduled on-call rotation.

Skills & Knowledge

Proficient as a Linux Production Systems Engineer, with experience managing large-scale Web Services infrastructure. Development experience in one or more of the following programming languages:

  • Python (preferred)
  • Bash, Java, Node.js, C++ or Ruby

In addition, experience with one or more of the following:

  • NoSQL at scale (e.g. Hadoop, Mongo clusters, and/or sharded Redis)
  • Event aggregation technologies (e.g. ElasticSearch)
  • Monitoring & Alerting, and Incident Management toolsets
  • Virtual infrastructure (deployment and management) at scale
  • Release Engineering (Package management and distribution at scale)
  • S/W Performance analysis and load testing (QA or SDET experience: a plus)

Location

  • Berlin, Germany

Who is hiring?

  • Gaikai / Sony Interactive Entertainment

If you are on LinkedIn, you can go and apply for this job directly there. If you like (you are not obliged to), please mention me as a referral.

Dear OpenStack Foundation

Why do I need to be an OpenStack Foundation Member when all I want is to send you a bugfix via a PR on GitHub?

I don't want to work on OpenStack per se, I just want to use one of the little utilities from your stack, and it doesn't work as expected under a newer version of Python :)

It would be nice if the barrier to contributing could be lowered.

How to create LXD Containers with Ansible 2.2

Ansible

LXD (a working example from this post can be found on my GitHub page)

Having worked with Ansible for a couple of years now, and with LXD as my local test environment, I have been waiting for a simple way to create LXD containers (locally and remotely) with Ansible from scratch, without using any helpers like shell: lxd etc.

Since Ansible 2.2 we have native LXD support. Furthermore, the Ansible team actually showed some respect to the Python 3 community and implemented Python 3 support.

Preparations

First of all, you need the latest Ansible release; for example, you can install it in a Python 3 virtual environment via pip install ansible.

Create your Ansible directory layout

To make your life a little bit easier later on, create your Ansible directory structure and turn it into a Git repository.

user@home: ~> mkdir -p ~/Projects/git.ansible/lxd-containers
user@home: ~> cd ~/Projects/git.ansible/lxd-containers
user@home: ~/Projects/git.ansible/lxd-containers> mkdir -p {inventory,roles,playbooks}
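
Turning the tree into a Git repository is then just one more command (a minimal sketch, assuming Git is installed; commit once the inventory, playbooks and roles below exist):

user@home: ~/Projects/git.ansible/lxd-containers> git init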

Create your inventory file

Imagine you want to create 5 new LXD containers. You could write 5 playbooks to do it, or you can be smart and let Ansible do the looping for you. Working with inventory files is easy: an inventory is simply a file with an INI structure.

Let's create an inventory file for new LXD containers in ~/Projects/git.ansible/lxd-containers/inventory/containers:

[local]
localhost

[containers]
blog-01 ansible_connection=lxd
blog-02 ansible_connection=lxd
blog-03 ansible_connection=lxd
blog-04 ansible_connection=lxd
blog-05 ansible_connection=lxd

We have now defined 5 containers.
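
Since every container uses the same connection type, you could also set it once as a group variable instead of repeating it per host. A minimal equivalent sketch (not from the original post):

[local]
localhost

[containers]
blog-01
blog-02
blog-03
blog-04
blog-05

[containers:vars]
ansible_connection=lxd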

Create a playbook for running Ansible

Now we need an Ansible playbook.

A playbook is just a simple YAML file. You can edit this file with your editor of choice. I personally like Sublime Text 3 or GitHub's Atom, but any other editor (like Vim or Emacs) will do.

Create a new file under ~/Projects/git.ansible/lxd-containers/playbooks/lxd_create_containers.yml:

- hosts: localhost
  connection: local
  roles:
    - create_lxd_containers

Let's go through this briefly:

  • hosts: ...: defines the hosts to run Ansible on. Using localhost means this playbook runs on your local machine.
  • connection: local: Ansible will use a local connection instead of SSHing into a remote box.
  • roles: ...: a list of Ansible roles to be used by this playbook.

You could also write all the Ansible tasks directly in this playbook, but since you will want to reuse certain tasks for certain workloads, it's a better idea to split them into roles.

Create the Ansible role

Ansible roles are used to separate repeating tasks from the playbooks.

Think about this example: You have a playbook for all your webservers like this:

- hosts: webservers
  tasks:
    - name: apt update
      apt: update_cache=yes

and you have a playbook for all your database servers like this:

- hosts: databases
  tasks:
    - name: apt update
      apt: update_cache=yes

What do you see? Yes, the same task twice, namely "apt update".

To make our lives easier, instead of writing a task to update the system's package archive cache in every playbook, we create an Ansible role.

Ansible roles have a special directory structure; I advise you to read the good documentation over at the Ansible HQ.
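
For orientation, a typical role skeleton looks roughly like this (only tasks/ is needed for this post; the other directories are optional conventions):

    create_lxd_containers/
      tasks/
        main.yml        # the task list, created below
      defaults/         # (optional) default variables
      handlers/         # (optional) handlers, e.g. service restarts
      templates/        # (optional) Jinja2 templates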

Let's start with our role for creating LXD containers:

Create the directory structure

user@home: ~> cd ~/Projects/git.ansible/lxd-containers/roles/
user@home: ~/Projects/git.ansible/lxd-containers/roles/> mkdir -p create_lxd_containers/tasks

Now create a new YAML file and name it ~/Projects/git.ansible/lxd-containers/roles/create_lxd_containers/tasks/main.yml with this content:

- name: Create LXD Container
  connection: local
  become: false
  lxd_container:
    name: "{{item}}"
    state: started
    source:
      type: image
      mode: pull
      server: https://cloud-images.ubuntu.com/releases
      protocol: simplestreams
      alias: 16.04/amd64
    profiles: ['default']
    wait_for_ipv4_addresses: true
    timeout: 600
  with_items:
    - "{{groups['containers']}}"

- name: Check if Python2 is installed in container
  delegate_to: "{{item}}"
  raw: dpkg -s python
  register: python_check_is_installed
  failed_when: python_check_is_installed.rc not in [0,1]
  changed_when: false
  with_items:
    - "{{groups['containers']}}"

- name: Install Python2 in container
  delegate_to: "{{item.item}}"
  raw: apt-get update && apt-get install -y python
  when: item.rc == 1
  with_items:
    - "{{python_check_is_installed.results}}"

Let's go through the different tasks

Create the LXD Container

- name: Create LXD Container
  connection: local
  become: false
  lxd_container:
    name: "{{item}}"
    state: started
    source:
      type: image
      mode: pull
      server: https://cloud-images.ubuntu.com/releases
      protocol: simplestreams
      alias: 16.04/amd64
    profiles: ['default']
    wait_for_ipv4_addresses: true
    timeout: 600
  with_items:
    - "{{groups['containers']}}"
  • connection: local: means it's only running on your local machine.
  • become: false: don't use su or sudo to become a superuser.
  • lxd_container: ...: this is the Ansible LXD module definition. Read the documentation about this module here: Ansible LXD Documentation
  • with_items: ...: this is one of the many Ansible loop statements. In this case, we are looping over the Inventory Group 'containers' (which we defined in the inventory file earlier).

The "{{item}}" variable will be filled in by the with_items: ... loop; again, a hint to read Ansible's good documentation about loops.
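
If you want to see what "{{item}}" expands to, a small (hypothetical) debug task illustrates the loop:

- name: Show each container name from the inventory group
  debug:
    msg: "Would operate on container {{ item }}"
  with_items:
    - "{{ groups['containers'] }}"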

Check if Python2 is installed inside the container

- name: Check if Python2 is installed in container
  delegate_to: "{{item}}"
  raw: dpkg -s python
  register: python_check_is_installed
  failed_when: python_check_is_installed.rc not in [0,1]
  changed_when: false
  with_items:
    - "{{groups['containers']}}"
  • delegate_to: ...: this key tells Ansible not to use the default connection target, but to delegate the connection and the work to the host given in delegate_to.
  • raw: ...: this key tells Ansible to use the raw module. Raw means we cannot rely on anything being present on the target, not even Python, which regular Ansible modules need. The command is simply executed over the connection, which is SSH by default, but here it is the local LXD connection (roughly like lxc exec <container-name> -- <command>). In this case we execute dpkg -s python, because we want to find out whether Python2 is installed.
  • register: ...: while executing the raw: ... command, Ansible captures its output (stdout, stderr) and its return code. register: ... defines a 'variable' to store this result. Normally this 'variable' is a Python/JSON dictionary for a particular host, but because we iterate over the 'containers' inventory group, it also gets a results array (which we will use in the next task) where Ansible stores the outputs of all host checks. During this task's execution, however, it is still usable as a single result set.
  • failed_when: ...: this marks the task as failed if the registered 'variable' is not accessible or the return code is neither 0 nor 1 (i.e. the command reported something other than success or a plain "not installed"). (More documentation can be found here.)
  • changed_when: false: a raw task would otherwise always be reported as changed, meaning Ansible would report one change on every run. Since this check changes nothing, we set changed_when to false. (More documentation can be found here.)
  • with_items: ...: this is one of the many Ansible loop statements. In this case, we are looping over the Inventory Group 'containers' (which we defined in the inventory file earlier).

The "{{item}}" variable will be filled in by the with_items: ... loop; again, a hint to read Ansible's good documentation about loops.

Install Python2 if it is not installed in the container

- name: Install Python2 in container
  delegate_to: "{{item.item}}"
  raw: apt-get update && apt-get install -y python
  when: item.rc == 1
  with_items:
    - "{{python_check_is_installed.results}}"
  • delegate_to: ...: this key tells Ansible not to use the default connection target, but to delegate the connection and the work to the host given in delegate_to.
  • raw: ...: this key tells Ansible to use the raw module again; as above, the command is executed over the local LXD connection without needing Python on the target. In this case we execute apt-get update && apt-get install -y python to install Python2.
  • when: ...: this is a conditional. The task only executes when the condition is met, in this case when the return code equals 1, which is true when the Python2 check reported that Python2 was not installed.
  • with_items: ...: this is one of the many Ansible loop statements. In this case, we are looping over the Inventory Group 'containers' (which we defined in the inventory file earlier).

The "{{item}}" variable will be filled in by the with_items: ... loop; again, a hint to read Ansible's good documentation about loops. In this case we are looping over the result sets of the Python2 install check, collected in the 'variable' python_check_is_installed.

Some more information

In the playbook and in the first task (create LXD containers) we used a local connection, which simply means that Ansible does its work on your local workstation. Inside the inventory INI file there is this key/value pair: ansible_connection=lxd.

For the two other tasks, which are delegated to the newly created containers, Ansible would normally attempt an SSH connection (that is what happens when you remove ansible_connection=lxd). With this special setting in the inventory INI file it won't try to use SSH towards the containers, but the local LXD connection instead.
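
As a quick sanity check that the LXD connection works (a hedged example, not from the original post; adjust the inventory path to your layout), you can run an ad-hoc raw command against the containers group once the containers exist:

~/Projects/git.ansible/lxd-containers > ansible -i inventory/containers containers -m raw -a "hostname"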

Bringing it all together

Let's start Ansible to do the work we want it to do:

~/Projects/git.ansible/lxd-containers > ansible-playbook -i inventory/containers playbooks/lxd_create_containers.yml

PLAY [localhost] ***************************************************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [create_lxd_containers : Create LXD Container] ****************************
changed: [localhost] => (item=blog-01)
changed: [localhost] => (item=blog-02)
changed: [localhost] => (item=blog-03)
changed: [localhost] => (item=blog-04)
changed: [localhost] => (item=blog-05)

TASK [create_lxd_containers : Check if Python2 is installed in container] ******
ok: [localhost -> blog-01] => (item=blog-01)
ok: [localhost -> blog-02] => (item=blog-02)
ok: [localhost -> blog-03] => (item=blog-03)
ok: [localhost -> blog-04] => (item=blog-04)
ok: [localhost -> blog-05] => (item=blog-05)

TASK [create_lxd_containers : Install Python2 in container] ********************
changed: [localhost -> blog-01] => (item={'changed': False, 'stdout': u'', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_host': u'blog-01'}, '_ansible_item_result': True, 'failed': False, 'item': u'blog-01', 'rc': 1, 'invocation': {'module_name': u'raw', 'module_args': {u'_raw_params': u'dpkg -s python'}}, 'stdout_lines': [], 'failed_when_result': False, 'stderr': u"dpkg-query: package 'python' is not installed and no information is available\nUse dpkg --info (= dpkg-deb --info) to examine archive files,\nand dpkg --contents (= dpkg-deb --contents) to list their contents.\n"})
changed: [localhost -> blog-02] => (item={'changed': False, 'stdout': u'', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_host': u'blog-02'}, '_ansible_item_result': True, 'failed': False, 'item': u'blog-02', 'rc': 1, 'invocation': {'module_name': u'raw', 'module_args': {u'_raw_params': u'dpkg -s python'}}, 'stdout_lines': [], 'failed_when_result': False, 'stderr': u"dpkg-query: package 'python' is not installed and no information is available\nUse dpkg --info (= dpkg-deb --info) to examine archive files,\nand dpkg --contents (= dpkg-deb --contents) to list their contents.\n"})
changed: [localhost -> blog-03] => (item={'changed': False, 'stdout': u'', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_host': u'blog-03'}, '_ansible_item_result': True, 'failed': False, 'item': u'blog-03', 'rc': 1, 'invocation': {'module_name': u'raw', 'module_args': {u'_raw_params': u'dpkg -s python'}}, 'stdout_lines': [], 'failed_when_result': False, 'stderr': u"dpkg-query: package 'python' is not installed and no information is available\nUse dpkg --info (= dpkg-deb --info) to examine archive files,\nand dpkg --contents (= dpkg-deb --contents) to list their contents.\n"})
changed: [localhost -> blog-04] => (item={'changed': False, 'stdout': u'', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_host': u'blog-04'}, '_ansible_item_result': True, 'failed': False, 'item': u'blog-04', 'rc': 1, 'invocation': {'module_name': u'raw', 'module_args': {u'_raw_params': u'dpkg -s python'}}, 'stdout_lines': [], 'failed_when_result': False, 'stderr': u"dpkg-query: package 'python' is not installed and no information is available\nUse dpkg --info (= dpkg-deb --info) to examine archive files,\nand dpkg --contents (= dpkg-deb --contents) to list their contents.\n"})
changed: [localhost -> blog-05] => (item={'changed': False, 'stdout': u'', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_host': u'blog-05'}, '_ansible_item_result': True, 'failed': False, 'item': u'blog-05', 'rc': 1, 'invocation': {'module_name': u'raw', 'module_args': {u'_raw_params': u'dpkg -s python'}}, 'stdout_lines': [], 'failed_when_result': False, 'stderr': u"dpkg-query: package 'python' is not installed and no information is available\nUse dpkg --info (= dpkg-deb --info) to examine archive files,\nand dpkg --contents (= dpkg-deb --contents) to list their contents.\n"})

PLAY RECAP *********************************************************************
localhost                  : ok=4    changed=2    unreachable=0    failed=0   

~/Projects/git.ansible/lxd-containers > lxc list
+---------+---------+-----------------------+------+------------+-----------+
|  NAME   |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+-----------------------+------+------------+-----------+
| blog-01 | RUNNING | 10.139.197.44 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| blog-02 | RUNNING | 10.139.197.10 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| blog-03 | RUNNING | 10.139.197.188 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| blog-04 | RUNNING | 10.139.197.221 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| blog-05 | RUNNING | 10.139.197.237 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+

Awesome, 5 containers created and Python2 installed.

Now it's time to do the real work (like installing your apps and testing them).
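
As a rough sketch of such a follow-up step (the play and the package name are just an example, not part of the original post), you can now target the containers group directly, since Python2 is available for regular Ansible modules:

- hosts: containers
  tasks:
    - name: Install nginx inside the containers
      apt:
        name: nginx
        state: present
        update_cache: yes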

New Blog

Welcome to my new Blog :)

It's been a long time since I wrote an article, because I was too busy with work and with my private life.

But there is so much to write about: what I did in the past, what I will do in the future, and whatever else is important.

The old blog articles will go into this new blog as well, but there is no direct way to import them, so I'll have to do that manually when time permits :)

10 Years of Ubuntu

OK, admittedly I am 2 months early: I was appointed an Ubuntu Member on 2005-06-15... but I started with Ubuntu packaging even earlier...

Anyhow, I already wrote my praise on Google+.

So just to make this public:

Thanks for 15.04 and all the other releases before (especially the LTS ones).

I think that during the last 10 years, Ubuntu has made a difference to the Linux community.

When I joined this journey, Ubuntu was just another distribution, with a SABDFL who was pumping a lot of money into his free project. I guess it was his private money, and the whole Linux community should be thankful to this geek.

Without Mark's engagement, I don't think Linux on the desktop would be as well known to the wider public.

Don't get me wrong, we had SuSE, we had Red Hat, we had Debian (and other smaller distros), but most of today's global players were famous for their involvement on the server side. (Well, not SuSE, because they focused on the desktop before they lost track and took a wrong turn; and no, I am not talking about openSUSE, that is a different story.)

Ten years ago, actually ten years and a couple of months, a small group of people was working on an integrated desktop environment based on GNOME. And they were right to do so. Those people, many of whom are still doing their job at Canonical, were right to invest their time in that.

And look where we are today! On the desktop, on the server, in the middle of the cloud, and on a freaking phone!

Who would have thought that 10 and a half years ago?

Yeah, I know, there were some decisions which were not so OK for the community, but honestly, even those wrong decisions were needed. Without wrong decisions we don't learn. Errors are there to learn from, even in a social environment.

To make my point: I think it's important to have one public figure to bring a project like Ubuntu forward, one person who draws all the fame and the hate towards himself, and Mark is exactly one of those figures.

Just look at other huge open-source projects like OpenStack or Hadoop. Great projects, I give them that, but there is no single person driving them, no person making the decisions about where the project has to go. That's why OpenStack as a stock open-source project is not a product. Hadoop, with all its fame, is not a product out of the box.

Too many companies have a say. That's why, for example, it's far from practical to install OpenStack from source and end up with a running cloud system. This is wrong, and those communities need someone who wears the hat and says where they are moving.

Democracy is good, I know, but in some environments democracy blocks innovation. Too many people, too many voices, too many wrong directions. Just look at the quality of the Ubuntu desktop pre-installed on Dell workstations and laptops. That's how you do it: you concentrate on quality, and you get vendors who will ship your PRODUCT!

Let's see:

  • We now have Ubuntu as a desktop OS (with Unity as the desktop).
  • We have Ubuntu as a server OS, running on countless bare-metal machines.
  • We have Ubuntu as a cloud OS, running on many, many Amazon instances, Docker instances and probably Rackspace instances.

But Ubuntu is more. The foundation of Ubuntu is driving many other Projects, like:

  • Kubuntu (aka the KDE Distro of Choice)
  • Ubuntu GNOME Remix
  • Ubuntu with XFCE, etc.
  • Mint Linux
  • Goobuntu
  • etc.

All those derivatives are based on the Ubuntu foundations, made, integrated and plumbed by so many smart and awesome people.

Thanks to all of You!

So what now?

Mobile is growing. Mobile first. Mobile is the way to go!

Ubuntu on the Phone is not an idea anymore, it's reality. Well done people. You made it!

But Ubuntu can even do more. Let's think about the next hype.

Hype like CoreOS.

A Linux OS which is image-based, with no package management, driven by small utilities like systemd, fleetd and/or etcd.

CoreOS is one of the projects I am really looking forward to using. But I really want to see Ubuntu there.

And yes, there is Ubuntu Snappy... so why not try to use Snappy as a CoreOS replacement?

There is Docker. Docker is being used as the dev utility for spinning up instances with specialised software on them.

Hell, Stephane Graber and his friends over at the Linux container community have LXD! LXD is driven by Stephane and his friends, and Stephane works for Canonical. So, I say: LXD is a Canonical project!

And what is Canonical? Canonical is a major contributor to Ubuntu. I want to see LXD as the Docker Replacement, with more security, with more energy, with better integration into Cloud Systems like OpenStack and/or CloudStack!

To make a long story short, Ubuntu is one of those Projects, which are not going away.

Even if Mark (hopefully not) retires, Canonical will be the driving force. There will be another Mark, and that's why Ubuntu is one of the driving forces in open-source development. Forget about contributor licenses, forget about all the decisions which were wrongly made.

We are here! We don't go away! We are Ubuntu, Linux for Human Beings! And we are here to stay, whatever you say! We are better, we are stronger, we are The Borg! ^W ^W ^W ^W forget this, this is a different movie ;)

And if you ask: "Dude, you are saying all this, and you were a member of this Project, where is your CONTRIBUTION!?!?"

My Answer is:

"I bring Ubuntu to the Business! I installed Ubuntu as Server OS in many Companies during the last couple of years.
I have integrated Ubuntu as a support OS in companies where you wouldn't expect it to run, supporting Operations or Service Reliability departments.
I am the Ubuntu integrator and evangelist you won't (normally) see, hear or read. I am one of the Ubuntu apostles, who are not bragging,
but bringing the light to the darkness"

;-)

PS: Companies like Netviewer AG, Podio (both now belong to Citrix Inc.) and Sony/Gaikai for their PlayStation Now product.

Python and JavaScript?

Is it possible to combine the world's most amazing prototyping language (aka Python) with JavaScript?

Yes, it is. Welcome to PyV8!


Prerequisites

So, first we need some libraries and modules:

  1. Boost with Python Support

    • On Ubuntu/Debian you just do apt-get install libboost-python-dev, for Fedora/RHEL use your package manager.
    • On Mac OS X, when you are using Homebrew, do this:

      brew install boost --with-python

  2. PyV8 Module

(You need Subversion installed for this)

    mkdir pyv8
    cd pyv8
    svn co http://pyv8.googlecode.com/svn/trunk/
    cd trunk

When you are on Mac OS X you need to add this first:

    export CXXFLAGS='-std=c++11 -stdlib=libc++ -mmacosx-version-min=10.8'
    export LDFLAGS=-lc++

Now just do this:

    python ./setup.py install

And wait!

(A word of advice: when you are installing Boost from your OS, make sure you are using the Python version which Boost was compiled with.)

  3. Luck ;)

Meaning: if this doesn't work, you have to ask Google.

Now, how does it work?

Easy, easy, my friend.
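
Before the real use case, a minimal sketch of what PyV8 gives you (assuming the module built correctly): you create a JavaScript context, enter it, and evaluate JavaScript straight from Python.

# minimal PyV8 usage sketch (illustrative, not from the original post)
import PyV8

ctx = PyV8.JSContext()   # create a V8 JavaScript context
ctx.enter()              # enter the context so we can evaluate code in it

# evaluate JavaScript expressions; results come back as Python values
print(ctx.eval("6 * 7"))            # 42
print(ctx.eval("'Py' + 'V8'"))      # PyV8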

The question is, why should we use JavaScript inside a Python tool?

Well, while doing some crazy stuff with our ElasticSearch clusters, I wrote a small Python script to do some nifty parsing and correlation. After not even 30 minutes I had a command-line tool which read in a YAML file with ES queries written in YAML format, and which could query more than one ES cluster automatically.

So, let's say you have a YAML like this:

title:
  name: "Example YAML Query File"
esq:
  hosts:
    es_cluster_1:
      fqdn: "localhost"
      port: 9200
    es_cluster_2:
      fqdn: "localhost"
      port: 10200
    es_cluster_3:
      fqdn: "localhost"
      port: 11200
indices:
  - index:
      id: "all"
      name: "_all"
      all: true
  - index:
      id: "events_for_three_days"
      name: "[events-]YYYY-MM-DD"
      type: "failover"
      days_from_today: 3
  - index:
      id: "events_from_to"
      name: "[events-]YYYY-MM-DD"
      type: "failover"
      interval:
        from: "2014-08-01"
        to: "2014-08-04"
query:
  on_index:
    all:
      filtered:
        filter:
          term:
            code: "INFO"
    events_for_three_days:
      filtered:
        filter:
          term:
            code: "ERROR"
    events_from_to:
      filtered:
        filter:
          term:
            code: "DEBUG"

No, this is not really what we are doing :) But I think you get the idea.

Now, in this example we have 3 different ElasticSearch clusters to search in; all three hold different data, but all share the same event format. So, my idea was to generate reports of the requested data, either for a single ES cluster or correlated over all three. I wanted to have that functionality inside the YAML file, so that everybody who writes such a YAML file can also add some processing code. Well, the result set of an ES search query is a JSON blob, and thanks to elasticsearch.py it is converted into a Python dictionary.
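
For reference, the standard response shape from elasticsearch.py looks roughly like this once it has been turned into a Python dictionary; the actual document fields live under _source (the field names here are just examples):

result = {
    "hits": {
        "total": 2,
        "hits": [
            # one dictionary per matching document
            {"_index": "events-2014-08-01", "_source": {"code": "ERROR", "msg": "something broke"}},
            {"_index": "events-2014-08-02", "_source": {"code": "ERROR", "msg": "something else broke"}},
        ],
    },
}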

Huh... so why not just put Python code inside the YAML and eval it inside your Python script?

Well, if you have ever written front-/backend web apps, you know it's pretty difficult to write frontend Python scripts that run inside your browser. So, JavaScript to the rescue. And everybody knows how easy it is to deal with JSON object structures inside JavaScript. So why don't we use this knowledge and invite users who are not familiar with Python to participate?

Now, think about an idea like this:

title:
  name: "Example YAML Query File"
esq:
  hosts:
    es_cluster_1:
      fqdn: "localhost"
      port: 9200
    es_cluster_2:
      fqdn: "localhost"
      port: 10200
    es_cluster_3:
      fqdn: "localhost"
      port: 11200
indices:
  - index:
      id: "all"
      name: "_all"
      all: true
  - index:
      id: "events_for_three_days"
      name: "[events-]YYYY-MM-DD"
      type: "failover"
      days_from_today: 3
  - index:
      id: "events_from_to"
      name: "[events-]YYYY-MM-DD"
      type: "failover"
      interval:
        from: "2014-08-01"
        to: "2014-08-04"
query:
  on_index:
    all:
      filtered:
        filter:
          term:
            code: "INFO"
    events_for_three_days:
      filtered:
        filter:
          term:
            code: "ERROR"
    events_from_to:
      filtered:
        filter:
          term:
            code: "DEBUG"
processing:
    for:
        report1: |
            function find_in_collection(collection, search_entry) {
                for (entry in collection) {
                    if (search_entry[entry]['msg'] == collection[entry]['msg']) {
                        return collection[entry];
                    }
                }
                return null;
            } 
            function correlate_cluster_1_and_cluster_2(collections) {
                collection_cluster_1 = collections["cluster_1"]["hits"]["hits"];
                collection_cluster_2 = collections["cluster_2"]["hits"]["hits"];
                similar_entries = [];
                for (entry in collection_cluster_1) {
                    similar_entry = null;
                    similar_entry = find_in_collection(collection_cluster_2, collection_cluster_1[entry]);
                    if (similar_entry != null) {
                        similar_entries.push(similar_entry);
                    }
                }
                result = {'similar_entries': similar_entries};
                return(result)
            }
            var result = correlate_cluster_1_and_cluster_2(collections);
            // this will return the data to the python method result 
            result
output:
    reports:
        report1: |
            {% for similar_entry in similar_entries %}
            {{ similar_entry.msg }}
            {% endfor %}

(This is not my actual code, I just scribbled it down, so don't lynch me if this fails)

So, actually, I am passing a Python dict with all the query result sets from the ES clusters (defined at the top of the YAML file) into a PyV8 context object; I can then access those collections inside my JavaScript and return a JavaScript hash/object. In the end, after the JavaScript processing, there could be a Jinja template inside the YAML file, and we can pass the JavaScript results into this template to print a nice report. There are many things you can do with this.

So, let's see it in python code:

# -*- coding: utf-8 -*-
# This will be a short form of this,
# so don't expect that this code will do the reading and validation
# of the YAML file

from elasticsearch import Elasticsearch
import PyV8
from jinja2 import Template

class JSCollections(PyV8.JSClass):
    def __init__(self, *args, **kwargs):
        super(JSCollections, self).__init__()
        self.collections = {}
        if 'collections' in kwargs:
            self.collections=kwargs['collections']

    def write(self, val):
        print(val)

if __name__ == '__main__':
    es_cluster_1 = Elasticsearch([{"host": "localhost", "port": 9200}])
    es_cluster_2 = Elasticsearch([{"host": "localhost", "port": 10200}])
    collections = {}
    collections['cluster_1'] = es_cluster_1.search(index="_all", body={"query": {"filtered": {"filter": {"term": {"code": "DEBUG"}}}}}, size=100)
    collections['cluster_2'] = es_cluster_2.search(index="_all", body={"query": {"filtered": {"filter": {"term": {"code": "DEBUG"}}}}}, size=100)
    js_ctx = PyV8.JSContext(JSCollections(collections=collections))
    js_ctx.enter()
    #
    # here comes the javascript code
    #
    process_result = js_ctx.eval("""
            function find_in_collection(collection, search_entry) {
                for (entry in collection) {
                    if (search_entry[entry]['msg'] == collection[entry]['msg']) {
                        return collection[entry];
                    }
                }
                return null;
            } 
            function correlate_cluster_1_and_cluster_2(collections) {
                collection_cluster_1 = collections["cluster_1"]["hits"]["hits"];
                collection_cluster_2 = collections["cluster_2"]["hits"]["hits"];
                similar_entries = [];
                for (entry in collection_cluster_1) {
                    similar_entry = null;
                    similar_entry = find_in_collection(collection_cluster_2, collection_cluster_1[entry]);
                    if (similar_entry != null) {
                        similar_entries.push(similar_entry);
                    }
                }
                result = {'similar_entries': similar_entries};
                return(result)
            }
            var result = correlate_cluster_1_and_cluster_2(collections);
            // this will return the data to the python method result 
            result
    """)
    # back to python
    print("RAW Process Result: {}".format(process_result))
    # create a jinja2 template and print it with the results from javascript processing
    template = Template("""
        {% for similar_entry in similar_entries %}
        {{ similar_entry.msg }}
        {% endfor %}
    """)
    print(template.render(process_result))

Again, I just wrote this down; it's not the actual code, so I don't know if it really works.

But still, this is pretty simple.

You can even use JavaScript Events, or JS debuggers, or create your own Server Side Browsers. You can find those examples in the demos directory of the PyV8 Source Tree.

So, this was all a 30-minute proof of concept; last night I refactored the code, and this morning I thought, well, let's write a real library for this. So maybe there will be some code on GitHub over the weekend. I'll let you know.

Oh, before I forget: the idea of writing all this in a YAML file came from working with Juniper's JunOS PyEZ library, which does something similar. But they use the YAML file as a description for autogenerated Python classes. Very nifty.

Thanks Jono

Thanks, Jono, for being this awesome Community Manager of Canonical/Ubuntu.

EoM

Dealing with Disrespect - a Review

Review

"Dealing with Disrespect"

by Jono Bacon

First of all a full disclosure:

The author, Jono Bacon, is a long-standing colleague of mine from working on the Ubuntu project. I am not in any way affiliated with his employer (Canonical), and sometimes (not all the time) I really don't share his views and/or opinions.

Personally, I see him as a friend, not a close one, but more like 'brothers in arms'. We share a passion for open source, and we both like Ubuntu, heavy metal and pints of beer. And, especially, we each like being a dad to the most adorable and awesome son we could ever have wished for.

I owe him a lot, because he (and some other community members, but he in particular) pulled me back into the Ubuntu Business a couple of years ago, and I am very thankful for this.

When Jono revealed his new writing two days ago, I started reading it right away, because, believe it or not, I was wondering whether he was referring to me to some extent: I can be exactly the guy he pictures in his latest book. The disrespectful, ranting and rambling guy, the angry 'open source' guy who sits too many hours a day in front of the computer and reads a lot of nonsense from people who think they are the smartest guys on this planet.

Someone who is passionate, angry and full of ramblings when it comes to certain positions in our technical world, and who sometimes speaks up too loudly.

Thankfully, he chose other examples, but I still found myself in his book, which is not exactly flattering.

Well, honestly, Jono hit the bullseye with his detailed description of the various aspects of how to read the different comments, responses or posts in our technical world.

His statement

"The trick here is to determine the attributes of the sender and the context."
(PDF, Page 8, 'Dealing with Disrespect')

is the essential message (he extends this later to the four important 'ingredients' sender, content, tone, context).

Old internet people like me, who still know Usenet, know how hard this can be. How many times did we read Usenet posts which were, in our eyes and ears, unacceptable, bollocks or insane, hit the 'Reply' button in our newsreader and flame some poor guy we didn't even know personally.

In those days we never thought about the other guy; we just flamed, we insulted on a very personal level, but, believe it or not, it also came back like a boomerang, and it really escalated. But those were the days when we all had leather for skin and could swallow a lot.

Today the world has changed: we don't use Usenet that often anymore, and our 'ramblings' can be found on weblogs and in their comment sections, or on web forums. What we say, write and comment nowadays, and how, is more publicly exposed than 20 years back. People have become softer; we try to be friendlier to each other, and we mostly use some variation of the word 'good', even to say that something was really bad.

What was missing all this time was a guide on how to deal with those who are not 'nice', who are not socially well conditioned, people who don't speak the politically correct English (or language of choice).

Until now.

Now Jono has written exactly this missing guide on how to deal with those people. And Jono didn't just write about it; he has the experience, working as 'The Community Manager' of Ubuntu. He has already dealt with them. He knows what he is writing about.

And he knows that not all of these people are antisocial, hateful or disrespectful.

Many of those people are smart and, in real life, really friendly. It just takes some experience to deal with them, and Jono has now given us the right guide to learn from.

I really urge you to read this little guide of Jono's, because you can learn from it, whether you are a community manager, have to deal with a very loud community, or are even the rambling guy yourself. It's worth a read. There is a lot to learn and to understand.

This book finally tries to solve issues which can't be fixed technically.

And thanks to Jono, I hope it will make our messed-up technical world a little more enjoyable.

Thanks for Ubuntu 14.04 LTS

So at last it's here: Ubuntu 14.04 LTS.

And I have to say 'Thank you' for pushing this out.

I have been running Trusty Tahr on my workstation for a long time now, since it was still in development. And it's one of the best releases so far.

Even during development I only encountered a few glitches, and they were easily worked around, which is actually pretty amazing.

If you have followed Ubuntu for some years (and have, to some extent, also been involved in pushing software into it), you know that this wasn't always the case.

We have had a couple of really serious hiccups, but this release went very smoothly. I think Canonical's push towards automated QA and the change in upload pocket behaviour were the right things to do.

Thanks, guys, for delivering this amazing release. You really can celebrate, drink a lot of booze and have a good meal (well, now that Jono is the definitive Ubuntu Smoker King, he could serve some delicious pulled pork or whatever else he is able to smoke ;))

Again, thank you, you all know who you are. You guys are amazing. Rock On!

Network Engineers, we are looking for you!

PlayStation Now

So, we have a Datacenter Engineer position open, and also a Network Engineer position.

As a prerequisite, you should be able to travel through Europe without any issues, and you should read/write/speak English in addition to your native language.

If you

  • are comfortable with travel
  • are familiar with routers and switches of different vendors
  • know that bonding slaves don't need a safe word
  • know that BGP is no medical condition
  • know how to crimp CAT 5/6/7
  • know the differences between the various types of fibre-optic (LWL) cable connections
  • have fun working with the smartest guys in this business
  • want to even learn something new
  • love games
  • love streaming
  • love PlayStation (well, this is not a must)

Still with me?

You will work out of our Berlin Office, which is in the Heart of Berlin.

You will work directly with our Southern California Based Network Engineering Team, with our Datacenter Team and with our SRE Team.

The Berlin team is a team of several nationalities, which combines the awesomeness of Spanish, Italian, French and German Minds. We all love good food and drinks, good jokes, awesome movies, and we all love to work in the hottest datacenter environments ever.

Is this something for you?

If so, you should apply now.

And applying for this job is as easy as provisioning a Cisco Nexus router today.

Two ways:

  1. You point your browser to our LinkedIn page and press 'Apply Now'. (Please mention me as a referral, and where you read this post.)
  2. Or you send your CV directly to me through the usual channels (PDF or ASCII, with a profile picture attached) and I will put you on top of the stack.

Hope to see you soon and to welcome you as part of our Sony/Gaikai family in Berlin.

I know some people are wary of LinkedIn, so here is the official job description from our HR department.

Job Description:

As a Network Engineer with deployment focus you will be responsible for rollout logistics, network deployment process and execution. You will work closely with remote Network Engineers and Datacenter Operations to turn up, configure, test and deliver Network platforms across POPs and Datacenters.

Principal Duties / Responsibilities:
  • Responsible for rollout logistics and coordination
  • Responsible for network deployment processes
  • Responsible for network deployment execution
  • Deployment and provisioning of Transport, Routing and Switching platforms
Required Knowledge / Skills:
  • Comfortable with travel
  • Comfortable with optical transport, DWDM
  • Comfortable with various network operating systems
  • Comfortable with some network testing equipment
  • Comfortable with structured cabling
  • Comfortable with interface and chassis diagnostics
  • Comfortable with basic power estimation and calculation

Desired Skills and Experience

Requirements:
  • BA degree or equivalent experience
  • 1-3 years working in a production datacenter environment
  • Experience with asset management and reporting
  • Knowledge of various vendor RMA processes to deal with repairs and returns

  • Keen understanding of data center operations, maintenance and technical requirements including replacement of components such as hard drives, RAM, CPUs, motherboards and power supplies.

  • Understanding of the importance of Change Management in an online production environment
  • High energy and an intense desire to positively impact the business
  • Ability to rack equipment up to 50 lbs unassisted

  • High aptitude for technology

  • Highly refined organizational skills
  • Strong interpersonal and communication skills and the ability to work collaboratively
  • Ability to manage multiple tasks at one time

Up to 50% travel required with this position.