CentOS 7, Rackspace and kswapd0

I’ve just started to test an application from one of my clients on CentOS 7. It’s a Python app that runs on Python 2.7, the default version in the latest release of CentOS/RHEL. As usual, the cloud providers don’t configure swap space by default, so I have Chef create it when the deploy starts. When I started to test the deployment, I discovered that the instance crashed while compiling some of the Python modules. While debugging, I found a problem that is reported very frequently if you search Google: “kswapd0 using all the CPU”.
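
The Chef recipe for the swap space essentially automates the standard swap-file setup. A minimal shell sketch (the size and path here are just examples):

# create and activate a 2 GB swap file
dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# make it persistent across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab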

I switched my tests to DigitalOcean, and there it worked perfectly. After a while, I discovered some differences in /etc/sysctl.conf: the Rackspace setup sets vm.swappiness=0, which triggers a bug in kswapd0 (there is information about this on the net). I disabled that setting and the variable went back to its default value of 30.
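
If you hit the same issue, the fix is quick. A sketch, assuming the offending line in /etc/sysctl.conf reads vm.swappiness = 0:

# check the current value
sysctl vm.swappiness
# comment out the offending line so the fix survives reboots
sed -i 's/^vm.swappiness/#&/' /etc/sysctl.conf
# apply the default again without rebooting
sysctl -w vm.swappiness=30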

And everything works again…

A little story from the daily life of a DevOps engineer.



Services that need /vagrant to start

I have a set of recipes that I use to build environments all the way from the local developer desktop up to production. The background is provided in my last post.

One of the differences between the Vagrant environment and the other ones is that HTTP apps have their web server DocumentRoot pointing to the /vagrant directory. This directory only becomes available late in the boot sequence, after the web server has already tried to start (at least with Apache on CentOS; I don’t remember how Ubuntu behaves right now). The web server fails to start because the directory isn’t available yet.

To fix this, I use Upstart and its event system. Look at this file (/etc/init/apache-vagrant.conf):

start on vagrant-mounted

task

script
  service httpd start
end script

This script catches the vagrant-mounted event, which is triggered when the /vagrant folder is ready to use. This way, the command to start Apache only runs once the directory exists.

Let me know if you find a better way to do this.


Automation from the developer and up

Since last week’s discussion with colleagues and clients about automation, I’ve been talking about a process I commonly use and that I think should be part of any DevOps practice. When we start to code our infrastructure for a new application, the starting point should be the environment used by the developer, and the finish line should be production. Let me explain further.

Vagrant has become very popular for managing local development environments via virtual machines. It has a feature called “provisioning” that uses existing automation tools to install and configure all of the software inside the VM. You can use your preferred tool: shell scripts, Chef, Puppet, Ansible or SaltStack (my preferred tool these days). This gives us the opportunity to create an environment for the developer similar to the test, QA, staging and production servers. There will obviously be some differences: the database on the developer desktop will live in the virtual machine itself, while it could be on separate servers in the other environments, and it’s possible that some security settings will differ too.
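
The day-to-day loop with provisioning is short. Assuming a Vagrantfile that already declares one of those provisioners, it comes down to two commands:

# first boot: create the VM and run the configured provisioner
vagrant up
# re-apply the recipes after changing them, without rebuilding the VM
vagrant provision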

The most important part of this idea is that the developer will use the same software, the same versions, and the same configuration that we will use in all the stages up to production. I have a habit of starting to code the automation in Vagrant on my local machine, then giving the results to the developers as the first beta testers. If that works for them, then I start to create all of the servers that will run the application.

DevOps culture involves (among other things) two processes: communication and automation. The second is clear (we are doing it with these tools), but the first, and most important, one begins when the developers define the requirements, test their local environment, and provide feedback. We provide an environment built to our infrastructure standards and, in exchange, we receive information about issues right where the application is being created. If a developer changes something inside the virtual machine, the change should be reported back to us so we can adjust our automation recipes. This gets the feedback loop working from the very beginning.

I forgot to mention the collaboration element of this process. It is impossible to use this process successfully without it.

In my experience this process produces positive results, reaching production with very few issues, and the issues that do appear are discovered in the early stages of the process. I hope it helps you as much as it has helped me.


My thoughts about SaltStack and why I use it

One year ago I started to use SaltStack. I had been using Puppet for a long time, but a client was using Salt so I had to learn it. After a year of using it, this is why it’s my preferred automation tool:

  • Simple syntax: you define the state of your servers (files, packages, whatever) using YAML files.
  • You can apply your recipes on demand: you run a command and all the changes are applied at that moment, with no need to wait for a polling interval (see the command sketch after this list).
  • Integrated remote execution. It’s in the core; you don’t need anything external.
  • Cloud support. salt-cloud is part of the core now: you set your credentials, tell it how many instances you want, and it launches them and connects them to the Salt master automatically. Again, you don’t need anything from outside.
  • Flexibility. It’s really flexible: you can easily write custom modules in Python to adapt everything to your needs.
  • Orchestration thanks to overstate. Overstate allows you to apply recipes to your servers in a specific order: for example, database servers first, then app servers.
  • The project is young, but it keeps growing and growing. The number of bug reports, feature requests, fixes and enhancements in every release is impressive.
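
A rough sketch of the day-to-day commands behind those points (the targets and the profile name here are made up):

# apply all recipes on demand, no polling involved
salt '*' state.highstate
# integrated remote execution from the same master
salt 'web*' cmd.run 'uptime'
# launch two instances from a salt-cloud profile; they attach to the master automatically
salt-cloud -p my_centos_profile web1 web2
# run the stages defined in an overstate file, in order
salt-run state.over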

There are surely more reasons that I’m missing.

You can see an example of something done with Salt here: a MongoDB cluster managed by Salt. Let me know if you have questions about it.


Elastic MongoDB

A few months ago, I had some spare time and wanted to combine my experience from the last year with MongoDB and SaltStack. The result is Elastic MongoDB, an example of orchestration/automation to set up a MongoDB cluster with replication and sharding using Salt. The instructions to launch it are in the README; it should be running in a few steps. There is only one issue: you have to run the salt \* state.highstate command three or four times to get everything working. That’s because there are some dependencies between the boxes, so the setup has to happen in a certain order. By default, Salt applies all the recipes at the same time, generating failures. To fix this I have to use a Salt feature called overstate, but I haven’t had time to implement it yet.
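
For the record, the overstate fix would look something like this (the stage names and minion targets are hypothetical). In /srv/salt/overstate.sls:

mongo-config:
  match: 'mongocfg*'

mongo-shards:
  match: 'mongodb*'
  require:
    - mongo-config

Then, instead of repeating state.highstate, a single run respects the order:

salt-run state.over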

I’ll fix it when I have some time, but you can try it anyway. Have a look here.

Feel free to ask any questions!


OpenStack: using one VLAN per project

One of the networking options in OpenStack is to use one VLAN per project/tenant. This allows us to separate projects completely. The documentation is not very clear about how to configure this: the steps for modifying the configuration files are fine, but the commands required to set up the network for each project are not well explained.

Basically, when you have a VLAN per project you have to create a network and then assign a subnet to it.

This is done with two commands. The first one creates the network:

neutron net-create \
 --tenant-id 62bc0d1979cf4bda82f1cde4ac36543a \
 --provider:physical_network=physnet1 \
 --provider:network_type=vlan \
 --provider:segmentation_id=12 \
 demo-net

demo-net is the name of the network that you are creating. You need to be careful with network_type and segmentation_id: the first one is self-explanatory, and the second one is the VLAN ID that you want to associate with that network. This is very important, because if you don’t specify it, packets will be tagged with the default VLAN ID (1).
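
The second command attaches a subnet to that network. It looks something like this (the CIDR, gateway and names are just placeholders):

neutron subnet-create \
 --tenant-id 62bc0d1979cf4bda82f1cde4ac36543a \
 --name demo-subnet \
 --gateway 192.168.12.1 \
 demo-net 192.168.12.0/24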



Making VM cloning simpler v2

In my last post I wrote about a procedure that I use to help me locate a VM’s IP address after creation. You can read about the problem here.

The problem with my solution is that you need to take networking control away from libvirt: creating the bridge by hand, configuring dnsmasq, and so on. So I looked at the code and made libvirt do it for me. Here is the commit.

Basically, this adds an option to the network config inside libvirt to set up forwarders for dnsmasq. This allows you to use dnsmasq as the nameserver for your host machine (using 127.0.0.1 in your resolv.conf). Every time a VM boots, it receives an IP address from dnsmasq and gets its FQDN mapped to that IP address (something like a DNS update, but it happens internally because the same daemon provides both services).

You have to set both the domain and the forwarders options to get this working.
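
To give an idea, a network definition using these options could look like the sketch below (the network name, domain and addresses are made up):

<network>
  <name>lab</name>
  <forward mode='nat'/>
  <domain name='vm.lab'/>
  <dns>
    <forwarder addr='8.8.8.8'/>
  </dns>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>

Load it with virsh net-define lab.xml and virsh net-start lab, point your resolv.conf at 127.0.0.1, and new VMs become resolvable by name as soon as they get their lease.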

This new feature is documented and will appear in a future release of libvirt.
