Planet Sysadmin               

          blogs for sysadmins, chosen by sysadmins...

October 21, 2014

Chris Siebenmann

Some numbers on our inbound and outbound TLS usage in SMTP

As a result of POODLE, it's suddenly rather interesting to find out the volume of SSLv3 usage that you're seeing. Fortunately for us, Exim directly logs the SSL/TLS protocol version in a relatively easy to search for format; it's recorded as the 'X=...' parameter for both inbound and outbound email. So here's some statistics, first from our external MX gateway for inbound messages and then from our other servers for external deliveries.
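
(As a rough illustration of the sort of log trawling involved, here is a minimal Python sketch that tallies the protocol versions. It assumes the usual 'X=<protocol>:<cipher>:<bits>' parameter format and a made-up log path, so treat it as a starting point rather than anything definitive.)

#!/usr/bin/env python
# Tally SSL/TLS protocol versions seen in an Exim main log.
# Assumptions: the standard 'X=<protocol>:<cipher>:<bits>' log parameter
# and a hypothetical log path; adjust both for your own setup.
import re
from collections import Counter

X_PARAM = re.compile(r' X=([^:\s]+):')

def tally_tls(logfile):
    counts = Counter()
    with open(logfile) as f:
        for line in f:
            m = X_PARAM.search(line)
            if m:
                counts[m.group(1)] += 1
            elif ' <= ' in line:
                # an inbound message arrival line with no X= parameter
                counts['no TLS'] += 1
    return counts

if __name__ == '__main__':
    for proto, count in tally_tls('/var/log/exim4/mainlog').most_common():
        print('%s: %d' % (proto, count))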

Over the past 90 days, we've received roughly 1.17 million external email messages. 389,000 of them were received with some version of SSL/TLS. Unfortunately our external mail gateway currently only supports up to TLS 1.0, so the only split I can report is that only 130 of these messages were received using SSLv3 instead of TLS 1.0. 130 messages is low enough for me to examine the sources by hand; the only particularly interesting and eyebrow-raising ones were a couple of servers at a US university and a .nl ISP.

(I'm a little bit surprised that our Exim doesn't support higher TLS versions, to be honest. We're using Exim on Ubuntu 12.04, which I would have thought would support something more than just TLS 1.0.)

On our user mail submission machine, we've delivered to 167,000 remote addresses over the past 90 days. Almost all of them, 158,000, were done with SSL/TLS. Only three of them used SSLv3 and they were all to the same destination; everything else was TLS 1.0.

(It turns out that very few of our user-submitted messages were received with TLS, only 0.9%. This rather surprises me, but maybe many IMAP programs default to not using TLS even if the submission server offers it. All of this small number of submissions used TLS 1.0, as I'd hope.)

Given that our Exim version only supports TLS 1.0, these numbers are more boring than I was hoping they'd be when I started writing this entry. That's how it goes sometimes; the research process can be disappointing as well as educational.

(I did verify that our SMTP servers really only do support up to TLS 1.0 and it's not just that no one asked for a higher version than that.)

One set of numbers I'd like to get for our inbound email is how TLS usage correlates with spam score. Unfortunately our inbound mail setup makes it basically impossible to correlate the bits together, as spam scoring is done well after TLS information is readily available.

Sidebar: these numbers don't quite mean what you might think

I've talked about inbound message deliveries and outbound destination addresses here because that's what Exim logs information about, but of course what is really encrypted is connections. One (encrypted) connection may deliver multiple inbound messages and certainly may be handed multiple RCPT TO addresses in the same conversation. I've also made no attempt to aggregate this by source or destination, so very popular sources or destinations (like, say, Gmail) will influence these numbers quite a lot.

All of this means that these numbers can't be taken as an indication of how many sources or destinations do TLS with us. All I can talk about is message flows.

(I can't even talk about how many outgoing messages are completely protected by TLS, because to do that I'd have to work out how many messages had no non-TLS deliveries. This is probably possible with Exim logs, but it's more work than I'm interested in doing right now. Clearly what I need is some sort of easy to use Exim log aggregator that will group all log messages for a given email message together and then let me do relatively sophisticated queries on the result.)
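
(The grouping part of such an aggregator is not much code, for what it's worth. Here's an untested Python sketch of the idea; the queue ID regex matches Exim's usual xxxxxx-yyyyyy-zz format and the log path is a placeholder, so consider it an outline rather than a finished tool.)

# Group Exim main log lines by queue ID so that all of the log lines for a
# given message can be looked at together.
import re
from collections import defaultdict

QUEUE_ID = re.compile(r'\b[0-9A-Za-z]{6}-[0-9A-Za-z]{6}-[0-9A-Za-z]{2}\b')

def group_by_message(logfile):
    messages = defaultdict(list)
    with open(logfile) as f:
        for line in f:
            m = QUEUE_ID.search(line)
            if m:
                messages[m.group(0)].append(line.rstrip('\n'))
    return messages

if __name__ == '__main__':
    # Example query: messages that had at least one delivery made without TLS.
    for qid, lines in group_by_message('/var/log/exim4/mainlog').items():
        deliveries = [l for l in lines if ' => ' in l]
        if deliveries and any(' X=' not in l for l in deliveries):
            print(qid)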

by cks at October 21, 2014 03:28 AM

October 20, 2014

Everything Sysadmin

See you tomorrow evening at the Denver DevOps Meetup!

Hey Denver folks! Don't forget that tomorrow evening (Tue, Oct 21) I'll be speaking at the Denver DevOps Meetup. It starts at 6:30pm! Hope to see you there!

October 20, 2014 04:28 PM

Mark Shuttleworth

V is for Vivid

Release week! Already! I wouldn’t call Trusty ‘vintage’ just yet, but Utopic is poised to leap into the torrent stream. We’ve all managed to land our final touches to *buntu and are excited to bring the next wave of newness to users around the world. Glad to see the unicorn theme went down well, judging from the various desktops I see on G+.

And so it’s time to open the vatic floodgates and invite your thoughts and contributions to our soon-to-be-opened iteration next. Our ventrous quest to put GNU as you love it on phones is bearing fruit, with final touches to the first image in a new era of convergence in computing. From tiny devices to personal computers of all shapes and sizes to the ventose vistas of cloud computing, our goal is to make a platform that is useful, versal and widely used.

Who would have thought – a phone! Each year in Ubuntu brings something new. It is a privilege to celebrate our tenth anniversary milestone with such vernal efforts. New ecosystems are born all the time, and it’s vital that we refresh and renew our thinking and our product in vibrant ways. That we have the chance to do so is testament to the role Linux at large is playing in modern computing, and the breadth of vision in our virtual team.

To our fledgling phone developer community, for all your votive contributions and vocal participation, thank you! Let’s not be vaunty: we have a lot to do yet, but my oh my what we’ve made together feels fantastic. You are the vigorous vanguard, the verecund visionaries and our venerable mates in this adventure. Thank you again.

This verbose tract is a venial vanity, a chance to vector verbal vibes, a map of verdant hills to be climbed in months ahead. Amongst those peaks I expect we’ll find new ways to bring secure, free and fabulous opportunities for both developers and users. This is a time when every electronic thing can be an Internet thing, and that’s a chance for us to bring our platform, with its security and its long term support, to a vast and important field. In a world where almost any device can be smart, and also subverted, our shared efforts to make trusted and trustworthy systems might find fertile ground. So our goal this next cycle is to show the way past a simple Internet of things, to a world of Internet things-you-can-trust.

In my favourite places, the smartest thing around is a particular kind of monkey. Vexatious at times, volant and vogie at others, a vervet gets in anywhere and delights in teasing cats and dogs alike. As the upstart monkey in this business I can think of no better mascot. And so let’s launch our vicenary cycle, our verist varlet, the Vivid Vervet!

by mark at October 20, 2014 01:22 PM

Google Blog

DISTRICT VOICES: Inside Panem with our finest citizens

Meet District Voices, the latest campaign in our Art, Copy & Code project—where we explore new ways for brands to connect with consumers through experiences that people love, remember and share. District Voices was created in partnership with Lionsgate to promote the upcoming release of The Hunger Games: Mockingjay Part 1. -Ed.

Greetings, Citizens of Panem!

The Capitol has joined forces with Google and YouTube to celebrate the proud achievements of our strong, lively districts. Premiering today on YouTube, a new miniseries called DISTRICT VOICES will take you behind the scenes to meet some of Panem’s most creative—and loyal—citizens.

At 4 p.m. EDT/ 1 p.m. PDT every day this week, one of your favorite Citizen creators from YouTube will give you a never-before-seen tour of their districts. First, the Threadbanger textile experts of District 8 will show how utility meets beauty in this season’s fashion—plus, you’ll get a look at a new way to wear your Capitol pride. Tomorrow, District 2's Shane Fazen will provide a riveting demonstration of how we keep our noble peacekeepers in tip-top shape. On Wednesday, Derek Muller from District 5—Panem’s center of power generation—will give you a peek at a revolutionary new way to generate electricity. Thursday The Grain District’s own Feast of Fiction will show you how to bake one of beloved victor Peeta Mellark’s most special treats. And finally, iJustine, District 6’s liaison to the Capitol, will give you an exclusive glimpse at the majestic and powerful peacekeeper vehicles in action.

Tune in at CAPITOL TV. And remember—Love your labor. Take pride in your task. Our future is in your hands.

by Emily Wood at October 20, 2014 10:05 AM

Tech Teapot

New Aviosys IP Power 9858 Box Opening

A series of box-opening photos of the new Aviosys IP Power 9858, a 4-port network power switch. This model will in due course replace the Aviosys IP Power 9258 series of power switches. The 9258 series is still available in the meantime though, so don’t worry.

The new model supports WiFi (802.11n-b/g and WPS for easy WiFi setup), auto reboot on ping failure, time of day scheduler and internal temperature sensor. Aviosys have also built apps for iOS and Android, so you can now manage your power switch on the move. Together with the 8 port Aviosys IP Power 9820 they provide very handy tools for remote power management of devices. Say goodbye to travelling to a remote site just to reboot a broadband router.

[Photo gallery: Aviosys IP Power 9858DX: closed box, open box, front with WiFi aerial, front panel, rear panel, rear close-up]



by Jack Hughes at October 20, 2014 07:00 AM

Chris Siebenmann

Revisiting Python's string concatenation optimization

Back in Python 2.4, CPython introduced an optimization for string concatenation that was designed to reduce memory churn in this operation and I got curious enough about this to examine it in some detail. Python 2.4 is a long time ago and I recently was prompted to wonder what had changed since then, if anything, in both Python 2 and Python 3.

To quickly summarize my earlier entry, CPython only optimizes string concatenations by attempting to grow the left side in place instead of making a new string and copying everything. It can only do this if the left side string only has (or clearly will have) a reference count of one, because otherwise it's breaking the promise that strings are immutable. Generally this requires code of the form 'avar = avar + ...' or 'avar += ...'.

As of Python 2.7.8, things have changed only slightly. In particular concatenation of Unicode strings is still not optimized; this remains a byte string only optimization. For byte strings there are two cases. Strings under somewhat less than 512 bytes can sometimes be grown in place by a few bytes, depending on their exact sizes. Strings over that can be grown if the system realloc() can find empty space after them.

(As a trivial case, CPython also optimizes concatenating an empty string to something by just returning the other string with its reference count increased.)

In Python 3, things are more complicated but the good news is that this optimization does work on Unicode strings. Python 3.3+ has a complex implementation of (Unicode) strings, but it does attempt to do in-place resizing on them under appropriate circumstances. The first complication is that internally Python 3 has a hierarchy of Unicode string storage and you can't do an in-place concatenation of a more complex sort of Unicode string into a less complex one. Once you have compatible strings in this sense, in terms of byte sizes the relevant sizes are the same as for Python 2.7.8; Unicode string objects that are less than 512 bytes can sometimes be grown by a few bytes while ones larger than that are at the mercy of the system realloc(). However, how many bytes a Unicode string takes up depends on what sort of string storage it is using, which I think mostly depends on how big your Unicode characters are (see this section of the Python 3.3 release notes and PEP 393 for the gory details).

So my overall conclusion remains as before; this optimization is chancy and should not be counted on. If you are doing repeated concatenation you're almost certainly better off using .join() on a list; if you think you have a situation that's otherwise, you should benchmark it.
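
(If you do want to benchmark it, something quick and dirty along these lines will do; this is just a sketch, and the absolute numbers will depend heavily on your Python version and on whether the in-place resize optimization actually fires.)

# Rough comparison of repeated '+=' concatenation against ''.join().
import timeit

def concat_plus(n):
    s = ''
    for i in range(n):
        s += 'x' * 10
    return s

def concat_join(n):
    return ''.join('x' * 10 for i in range(n))

for name, func in (('+=', concat_plus), ('join', concat_join)):
    elapsed = timeit.timeit(lambda: func(100000), number=10)
    print('%s: %.3f seconds' % (name, elapsed))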

(In Python 3, the place to start is PyUnicode_Append() in Objects/unicodeobject.c. You'll probably also want to read Include/unicodeobject.h and PEP 393 to understand this, and then see Objects/obmalloc.c for the small object allocator.)

Sidebar: What the funny 512 byte breakpoint is about

Current versions of CPython 2 and 3 allocate 'small' objects using an internal allocator that I think is basically a slab allocator. This allocator is used for all objects that are 512 bytes or less overall, and it rounds object sizes up to the next 8-byte boundary. This means that if you ask for, say, a 41-byte object you actually get one that can hold up to 48 bytes and thus can be 'grown' in place up to this size.
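
(The rounding arithmetic itself is trivial; here is a sketch of it, using the 41-byte example from above.)

def small_alloc_size(nbytes):
    # CPython's small object allocator only handles requests of 512 bytes
    # or less; it rounds them up to the next multiple of 8 bytes.
    if nbytes > 512:
        raise ValueError("not a small object")
    return (nbytes + 7) & ~7

print(small_alloc_size(41))   # -> 48, so a 41-byte object can 'grow' to 48 bytes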

by cks at October 20, 2014 04:37 AM

October 19, 2014

Ubuntu Geek

Configuring layer-two peer-to-peer VPN using n2n

Sponsored Link
n2n is a layer-two peer-to-peer virtual private network (VPN) which allows users to exploit features typical of P2P applications at network instead of application level. This means that users can gain native IP visibility (e.g. two PCs belonging to the same n2n network can ping each other) and be reachable with the same network IP address regardless of the network where they currently belong. In a nutshell, as OpenVPN moved SSL from application (e.g. used to implement the https protocol) to network protocol, n2n moves P2P from application to network level.
Read the rest of Configuring layer-two peer-to-peer VPN using n2n (416 words)


by ruchi at October 19, 2014 11:20 PM

Evaggelos Balaskas

SatNOGS - Satellite Networked Open Ground Station

What started as a NASA Space Apps Challenge entry has now become an extraordinary open-source achievement, among the top five finalists of that challenge.

What is SatNOGS, in non-technical words: imagine a cheap, mobile, open-hardware ground station that can collaborate over the internet with other ground stations and gather satellite signals together, all feeding into a holistic open-source/open-data, publicly accessible database and site!

If you are thinking that can't be right, the answer is that it is!

The amazing team behind SatNOGS is working around the clock, non-stop, with only open hardware and free software, to do exactly that!

It is a fully modular system (you can choose your own antennas or base setup). You can review the entire code on GitHub, watch high-quality videos and guides for every step and every process, and participate via comments, emails or even satellite signals!


3D printing is one of the major components of their journey so far. They have already published every design they are using for the SatNOGS project on GitHub; you just need to print them. All of the non-3D-printed hardware is available at any hardware store near you. The members of this project have also published the Arduino code and schematics for the electronics!

Everything is fully documented in detail, and everything is open source!



It seems that I may be biased, so don't believe anything I am writing.
See for yourself and be mind-blowingly impressed by the quality of their hardware documentation.

Visit their Facebook account for news, and contact them if you have a brilliant idea about satellites or just want an update on their work.

How about the team?

I've met the entire team at Athens Hackerspace, and the first thing that came to my mind (and it is most impressive) is the diversity of the members themselves.

Not only in age (most of them are university students, but older hobbyists are participating too) but also in technical areas of expertise. This team can easily solve every practical problem they run into along the way.

SatNOGS, as I've already mentioned, is fully active, and it all started (with the big bang, of course) with an idea: to reach and communicate with space (the final frontier). Satellites are sending signals 24/7, but ground stations can't reach every satellite (I am not talking about geostationary satellites) and often there is no one there to receive them. The problem that SatNOGS is solving is real.

And I hope that with this blog post more people can understand how important it is that this project scales to more hackerspaces around the globe.

To see more, just click here and you can follow their entire progress so far.

Tag(s): SatNOGS

October 19, 2014 09:28 PM

Ferry Boender

Bexec v0.8: Execute a vim buffer and capture output in split window

I released v0.8 of my Bexec vim plugin. The Bexec plugin allows the user to execute the current buffer if it contains a script with a shebang (#!/path/to/interpreter) on the first line or if the default interpreter for the script's type is known by Bexec. The output of the script will be grabbed and displayed in a separate buffer. 

New in this release:

  • Honor the splitbelow and splitright vim settings (patch by Christopher Pease).


Installation instructions:

  1. Download the Vimball
  2. Start vim with: vim bexec-v0.8.vmb
  3. In Vim, type: :source %
  4. Bexec is now installed. Type :Bexec to run it, or use <MapLeader>bx



by admin at October 19, 2014 01:22 PM

Chris Siebenmann

Vegeta, a tool for web server stress testing

Standard stress testing tools like siege (or the venerable ab, which you shouldn't use) are all systems that do N concurrent requests at once and see how your website stands up to this. This model is a fine one for putting a consistent load on your website for a stress test, but it's not actually representative of how the real world acts. In the real world you generally don't have, say, 50 clients all trying to repeatedly make and re-make one request to you as fast as they can; instead you'll have 50 new clients (and requests) show up every second.

(I wrote about this difference at length back in this old entry.)

Vegeta is an HTTP load and stress testing tool that I stumbled over at some point. What really attracted my attention is that it uses an 'N requests a second' model instead of the concurrent request model. As a bonus it will also report not just average performance but also outliers, in the form of the 90th and 99th percentiles. It's written in Go, which some of my readers may find annoying but which I rather like.

I gave it a try recently and, well, it works. It does what it says it does, which means that it's now become my default load and stress testing tool; 'N new requests a second' is a more realistic and thus interesting test than 'N concurrent requests' for my software (especially here, for obvious reasons).

(I may still do N concurrent requests tests as well, but it'll probably mostly be to see if there are issues that come up under some degree of consistent load and if I have any obvious concurrency race problems.)

Note that as with any HTTP stress tester, testing with high load levels may require a fast system (or systems) with plenty of CPUs, memory, and good networking if applicable. And as always you should validate that vegeta is actually delivering the degree of load that it should be, although this is actually reasonably easy to verify for a 'N new request per second' tester.

(Barring errors, N new requests a second over an M second test run should result in N*M requests made and thus appearing in your server logs. I suppose the next time I run a test with vegeta I should verify this myself in my test environment. In my usage so far I just took it on trust that vegeta was working right, which in light of my ab experience may be a little bit optimistic.)
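
(That check is only a few lines of Python; this sketch assumes your web server logs one line per request and that the test requests can be picked out by their URL. Both the log path and the URL here are made up.)

# Compare the number of logged test requests against rate * duration.
RATE, DURATION = 50, 30              # what the load tester was asked to deliver
TEST_URL = '/some/test/url'          # hypothetical test target

seen = 0
with open('/var/log/nginx/access.log') as f:
    for line in f:
        if TEST_URL in line:
            seen += 1

print('expected ~%d requests, saw %d' % (RATE * DURATION, seen))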

by cks at October 19, 2014 06:04 AM

October 18, 2014


For other Movable Type blogs out there

If you're wondering why comments aren't working, as I was, and are on shared hosting, as I am, and get to looking at your error_log file and see something like this in it:

[Sun Oct 12 12:34:56 2014] [error] [client] 
ModSecurity: Access denied with code 406 (phase 2).
Match of "beginsWith http://%{SERVER_NAME}/" against "MATCHED_VAR" required.
[file "/etc/httpd/modsecurity.d/10_asl_rules.conf"] [line "1425"] [id "340503"] [rev "1"]
[msg "Remote File Injection attempt in ARGS (/cgi-bin/mt4/mt-comments.cgi)"]
[severity "CRITICAL"]
[hostname ""]
[uri "/cgi-bin/mt/mt-comments.cgi"]
[unique_id "PIMENTOCAKE"]

It's not just you.

It seems that some webhosts have a mod_security rule in place that bans submitting anything through "mt-comments.cgi". As this is the main way MT submits comments, this kind of breaks things. Happily, working around a rule like this is dead easy.

  1. Rename your mt-comments.cgi file to something else
  2. Add "CommentScript ${renamed file}" to your mt-config.cgi file

And suddenly comments start working again!

Except for Google, since they're deprecating OpenID support.

by SysAdmin1138 at October 18, 2014 09:46 PM

Chris Siebenmann

During your crisis, remember to look for anomalies

This is a war story.

Today I had one of those valuable learning experiences for a system administrator. What happened is that one of our old fileservers locked up mysteriously, so we power cycled it. Then it locked up again. And again (and an attempt to get a crash dump failed). We thought it might be hardware related, so we transplanted the system disks into an entirely new chassis (with more memory, because there were some indications that it might be running out of memory somehow). It still locked up. Each lockup took maybe ten or fifteen minutes from the reboot, and things were all the more alarming and mysterious because this particular old fileserver only had a handful of production filesystems still on it; almost all of them had been migrated to one of our new fileservers. After one more lockup we gave up and went with our panic plan: we disabled NFS and set up to do an emergency migration of the remaining filesystems to the appropriate new fileserver.

Only as we started the first filesystem migration did we notice that one of the ZFS pools was completely full (so full it could not make a ZFS snapshot). As we were freeing up some space in the pool, a little light came on in the back of my mind; I remembered reading something about how full ZFS pools on our ancient version of Solaris could be very bad news, and I was pretty sure that earlier I'd seen a bunch of NFS write IO at least being attempted against the pool. Rather than migrate the filesystem after the pool had some free space, we selectively re-enabled NFS fileservice. The fileserver stayed up. We enabled more NFS fileservice. And things stayed happy. At this point we're pretty sure that we found the actual cause of all of our fileserver problems today.

(Afterwards I discovered that we had run into something like this before.)

What this has taught me is during an inexplicable crisis, I should try to take a bit of time to look for anomalies. Not specific anomalies, but general ones; things about the state of the system that aren't right or don't seem right.

(There is a certain amount of hindsight bias in this advice, but I want to mull that over a bit before I write more about it. The more I think about it the more complicated real crisis response becomes.)

by cks at October 18, 2014 04:55 AM

Giri Mandalika

Blast from the Past : The Weekend Playlist #7

Previous playlists:

    #1 (50s, 60s and 70s) | #2 (80s) | #3 (80s) | #4 (80s) | #5 (80s) | #6 (90s)

Audio-Visual material courtesy: YouTube. Other information: Wikipedia.

1. Fatboy Slim / Norman Cook - Brimful of Asha (1998)

A remix. Original by UK band Cornershop.

2. Vanilla Ice - Ice Ice Baby (1990)

3. Beck - Loser (1993)

4. Primus - Mr. Krinkle (1993)

5. Tool - Stinkfist (1996)

If you don't mind watching dark videos, look for the official Stinkfist video on YouTube.

6. P.M. Dawn - Set Adrift On Memory Bliss (1991)

7. Primitive Radio Gods - Standing Outside A Broken Phone Booth (1996)

No traces of the official video anywhere on the web, for some reason.

8. Blues Traveler - Run-Around (1995)

Grammy winner.

9. KoRn - A.D.I.D.A.S. (1997)

Under Pressure mix. Another dark song that has nothing to do with sportswear brand, Adidas.

10. Chumbawamba - Tubthumping (1997)

A one-hit wonder.

by Giri Mandalika at October 18, 2014 01:00 AM

October 17, 2014

Everything Sysadmin

Usenix LISA early registration discount expires soon!

Register by Mon, October 20 and take advantage of the early bird pricing.

I'll be teaching tutorials on managing oncall, team-driven sysadmin tools, upgrading live services and more. Please register soon and save!

October 17, 2014 05:28 PM

Standalone Sysadmin

VM Creation Day - PowerShell and VMware Automation

I should have ordered balloons and streamers, because Monday was VM creation day on my VMware cluster.

In addition to a 3-node production-licensed vSphere cluster, I run a 10-node cluster specifically for academic purposes. One of those purposes is building and maintaining classroom environments. A lot of professors maintain a server or two for their courses, but our Information Assurance program here goes above and beyond in terms of VM utilization. Every semester, I've got to deal with the added load, so I figured if I'm going to document it, I might as well get a blog entry while I'm at it.

[Diagram: vmware_ia_spinup, the classroom VM spin-up process]

Conceptually, the purpose of this process is to allow an instructor to create a set of virtual machines (typically between 1 and 4 of them), collectively referred to as a 'pod', which will serve as a lab for students. Once this set of VMs is configured exactly as the professor wants, and they have signed off on them, those VMs become the 'Gold Images', and then each student gets their own instance of these VMs. A class can have between 10 and 70 students, so this quickly becomes a real headache to deal with, hence the automation.

Additionally, because these classes are Information Assurance courses, it's not uncommon for the VMs to be configured in an insecure manner (on purpose) and to be attacked by other VMs, and to generally behave in a manner unbecoming a good network denizen, so each class is cordoned off onto its own VLAN, with its own PFsense box guarding the entryway and doing NAT for the several hundred VMs behind the wall. The script needs to automate the creation of the relevant PFsense configs, too, so that comes at the end.

I've written a relatively involved PowerShell script to do my dirty work for me, but it's still a long series of things to go from zero to working classroom environment. I figured I would spend a little time to talk about what I do to make this happen. I'm not saying it's the best solution, but it's the one I use, and it works for me. I'm interested in hearing if you've got a similar solution going on. Make sure to comment and let everyone know what you're using for these kinds of things.

The process is mostly automated hard parts separated by manual staging, because I want to verify sanity at each step. This kind of thing happens infrequently enough that I'm not completely trusting of the process yet, mostly due to my own ignorance of all of the edge cases that can cause failures. To the right, you'll see a diagram of the process.

In the script, the first thing I do is include functions that I stole from an awesome post on Subnet Math with PowerShell from Indented!, a software blog by Chris Dent. Because I'm going to be dealing with the DHCP config, it'll be very helpful to be able to have functions that understand what subnet boundaries are, and how to properly increment IP addresses.

I need to make sure that, if this PowerShell script is running, we are actually loading the VMware PowerCLI cmdlets. We can do that like this:

if ( ( Get-PSSnapin -Name VMware.VimAutomation.Core -ErrorAction SilentlyContinue ) -eq $null ) {
    Add-PSSnapin VMware.VimAutomation.Core
}

For the class itself, this whole process consists of functions to do what needs to be done (or "do the needful" if you use that particular phrase), and it's fairly linear, and each step requires the prior to be completed. What I've done is to create an object that represents the course as a whole, and then add the appropriate properties and methods. I don't actually need a lot of the power of OOP, but it provides a convenient way to keep everything together. Here's an example of the initial class setup:

$IA = New-Object psobject

# Lets add some initial values
Add-Member -InputObject $IA -MemberType NoteProperty -Name ClassCode -Value ""
Add-Member -InputObject $IA -MemberType NoteProperty -Name Semester -Value ""
Add-Member -InputObject $IA -MemberType NoteProperty -Name Datastore -Value "FASTDATASTORENAME"
Add-Member -InputObject $IA -MemberType NoteProperty -Name Cluster -Value "IA Program"
Add-Member -InputObject $IA -MemberType NoteProperty -Name VIServer -Value "VSPHERE-SERVER"
Add-Member -InputObject $IA -MemberType NoteProperty -Name IPBlock -Value ""
Add-Member -InputObject $IA -MemberType NoteProperty -Name SubnetMask -Value ""
Add-Member -InputObject $IA -MemberType NoteProperty -Name Connected -Value $false
Add-Member -InputObject $IA -MemberType NoteProperty -Name ResourcePool -Value ""
Add-Member -InputObject $IA -MemberType NoteProperty -Name PodCount -Value ""
Add-Member -InputObject $IA -MemberType NoteProperty -Name GoldMasters -Value ""
Add-Member -InputObject $IA -MemberType NoteProperty -Name Folder -Value ""
Add-Member -InputObject $IA -MemberType NoteProperty -Name MACPrefix -Value ""
Add-Member -InputObject $IA -MemberType NoteProperty -Name ConfigDir -Value ""
Add-Member -InputObject $IA -MemberType NoteProperty -Name VMarray -Value @()

These are just the values that almost never change. Since we're using NAT, and we're not routing to that network, and every class has its own dedicated VLAN, we can use the same IP block every time without running into a problem. The blank values are there just as placeholders, and those values will be filled in as the class methods are invoked.

At the bottom of the script, which is where I spend most of my time, I set per-class settings:

$IA.ClassCode = "ia1234"
$IA.Semester = "Fall-2014"
$IA.PodCount = 35
$IA.GoldMasters = @(
    @{
        vmname = "ia1234-win7-gold-20141014"
        osname = "win7"
        tcp    = 3389
        udp    = ""
    },
    @{
        vmname = "ia1234-centos-gold-20141014"
        osname = "centos"
        tcp    = ""
        udp    = ""
    },
    @{
        vmname = "ia1234-kali-gold-20141014"
        osname = "kali"
        tcp    = "22"
        udp    = ""
    }
)

We set the class code, semester, and pod count simply. These will be used to create the VM names, the folders, and resource groups that the VMs live in. The GoldMaster array is a data structure that has an entry for each of the gold images that the professor has created. It contains the name of the gold image, plus a short code that will be used to name the VM instances coming from it, and has placeholders for the tcp and udp ports which need to be forwarded from the outside to allow internal access. I don't currently have the code in place that allows me to specify multiple port forwards, but that's going to be added, because I had a professor request 7(!) forwarded ports per VM in one of their classes this semester.

As you can see in the diagram, I'm using Linked Clones to spin up the students' pods. This has the advantage of saving diskspace and of completing quickly. Linked clones operate on a snapshot of the original disk image. Rather than actually have the VMs operate on the gold images, I do a full clone of the VM over to a faster datastore than the Ol' Reliable NetApp.

We add a method to the $IA object like this:

Add-Member -InputObject $IA -MemberType ScriptMethod -Name createLCMASTERs -Value {
    # This is the code that converts the gold images into LCMASTERs.
    # Because you need to put a template somewhere, it makes sense to put it
    # into the folder that the VMs will eventually live in themselves (thus saving
    # yourself the effort of locating the right folder twice).
    Process {
        # ... stuff goes here
    }
}

The core of this method is the following block, which actually performs the clone:

if ( ! (Get-VM -Name $LCMASTERName) ) {
    try {
        # Snapshot the gold image so the linked clones have something to back onto
        $presnap = New-Snapshot -Name ("Autosnap: " + $(Get-Date).ToString("yyyyMMdd")) -VM $GoldVM -Confirm:$false

        $cloneSpec = New-Object VMware.Vim.VirtualMachineCloneSpec
        $cloneSpec.Location = New-Object VMware.Vim.VirtualMachineRelocateSpec
        $cloneSpec.Location.Pool = ($IA.ResourcePool | Get-View).MoRef
        $cloneSpec.Location.Host = ($GoldVM | Get-VMHost | Get-View).MoRef
        $cloneSpec.Location.Datastore = ($IA.Datastore | Get-View).MoRef
        $cloneSpec.Location.DiskMoveType = [VMware.Vim.VirtualMachineRelocateDiskMoveOptions]::createNewChildDiskBacking
        $cloneSpec.Snapshot = ($GoldVM | Get-View).Snapshot.CurrentSnapshot
        $cloneSpec.PowerOn = $false

        ($GoldVM | Get-View).CloneVM( $LCMasterFolder.MoRef, $LCMASTERName, $cloneSpec )

        Remove-Snapshot -Snapshot $presnap -Confirm:$false
    } catch [Exception] {
        Write-Host "Error: " $_.Exception.Message
    }
} else {
    Write-Host "Template found with name $LCMasterName - not recreating"
}

If you're interested in doing this kind of thing, make sure you check out the docs for the createNewChildDiskBacking setting.

After the Linked Clone Masters have been created, then it's a simple matter of creating the VMs from each of them (using the $IA.PodCount value to figure out how many we need). They end up getting named something like $IA.ClassCode-$IA.Semester-$IA.GoldMasters[#].osname-pod$podcount which makes it easy to figure out what goes where when I have several classes running at once.

After the VMs have been created, we can start dealing with the network portion. I used to spin up all of the VMs, then loop through them and pull the MAC addresses to use with the DHCP config, but there were problems with that method. I found that a lot of the time, I'll need to rerun this script a few times per class, either because I've screwed something up or the instructor needs to make changes to the pod. When that happens, EACH TIME I had to re-generate the DHCP config (which is easy) and then manually insert it into PFsense (which is super-annoying).

Rather than do that every time, I eventually realized that it's much easier just to dictate what the MAC address for each machine is, and then it doesn't matter how often I rerun the script, the DHCP config doesn't change. (And yes, I'm using DHCP, but with static leases, which is necessary because of the port forwarding).

Here's what I do:

Add-Member -InputObject $IA -MemberType ScriptMethod -Name assignMACs -Value {
    Process {
        $StaticPrefix = "00:50:56"
        if ( $IA.MACPrefix -eq "" ) {
            # Since there isn't already a prefix set, it's cool to make one randomly
            $IA.MACPrefix = $StaticPrefix + ":" + ("{0:X2}" -f (Get-Random -Minimum 0 -Maximum 63) )
        }
        $machineCount = 0
        $IA.VMarray | ForEach-Object {
            $machineAddr = $IA.MACPrefix + ":" + ("{0:X4}" -f $machineCount).Insert(2,":")

            $vm = Get-VM -Name $_.Name    # assumes each VMarray entry carries the VM's name
            $networkAdapter = Get-NetworkAdapter -VM $vm
            Write-Host "Setting $vm to $machineAddr"
            Set-NetworkAdapter -NetworkAdapter $networkAdapter -MacAddress $machineAddr -Confirm:$false
            $IA.VMarray[$machineCount].MAC = $machineAddr
            $IA.VMarray[$machineCount].index = $machineCount
            $machineCount++
        }
    }
}


As you can see, this randomly assigns a MAC address in the vSphere range. Sort of. The fourth octet is randomly selected between 00 and 3F, and then the last two octets are incremented starting from 00. Optionally, the fourth octet can be specified, which is useful in a re-run of the script so that the DHCP config doesn't need to be re-generated.

After the MAC addresses are assigned, the IPs can be determined using the network math:

Add-Member -InputObject $IA -MemberType ScriptMethod -Name assignIPs -Value {
    # This method really only assigns the IP to the object.
    Process {
        # It was tempting to assign a sane IP block to this network, but given the
        # tendency to shove God-only-knows how many people into a class at a time,
        # let's not be bounded by reasonable or sane. /16 it is.
        # First 50 IPs are reserved for the gateway plus potential gold images.
        $currentIP = Get-NextIP $IA.IPBlock 2
        $IA.VMarray | ForEach-Object {
            $_.IPAddr = $currentIP
            $currentIP = Get-NextIP $currentIP 2
        }
    }
}

This is done by naively giving every other IP to a machine, leaving the odd IP addresses between them open. I've had to massage this before, where a large pod of 5-6 VMs all needed incremental IPs and then a skip before the next pod, but I've handled those mostly as one-offs. I don't think I need to build in a lot of flexibility because those are relatively rare cases, but it wouldn't be that hard to develop a scheme for it if you needed one.

After the IPs are assigned, you can create the DHCP config. Right now, I'm using an ugly hack, where I basically just print out the top of the DHCP config, then loop through the VMs outputting XML the whole way. It's ugly, and I'm not going to paste it here, but if you download a DHCPD XML file from PFsense, then you can basically see what I'm doing. I then do the same thing with the NAT config.

Because I'm still running these functions manually, I have these XML-creation methods printing output, but it's easy to see how you could have them redirect output to a text file (and if you were super-cool, you could use something like this example from MSDN, where you spin up an instance of IE):

$ie = new-object -com "InternetExplorer.Application"
... and so on

Anyway, I've spun up probably thousands of VMs using this script (or previous instances of it). It's saved me a lot of time, and if you have to manage bulk-VMs using vSphere, and you're not automating it (using PowerCLI, or vCloud Director, or something else), you really should be. And if you DO, what do you do? Comment below and let me know!

Thanks for reading all the way through!

by Matt Simmons at October 17, 2014 03:16 PM

Google Blog

Through the Google lens: search trends October 10-16

Diet secrets from Zach Galifianakis, and cord cutting from a cable company?! Here's a look at another topsy-turvy week in search.

A cast of characters
Search will always have its fair share of characters and this week was no different. First up, moviegoers learned who’s next in line for Hollywood’s superhero treatment when Ezra Miller, star of The Perks of Being a Wallflower, landed the title role in the 2018 film The Flash. And whispers are swirling in Tinseltown that Gal Gadot's already impressive resume—she’s set to play the world’s most famous Amazonian, Wonder Woman—will soon get another stellar addition, the lead female role in a remake of Ben-Hur.

But they weren’t the only celebrities to get the Internet buzzing. Comedian and fan favorite Zach Galifianakis caused a stir on the trends charts after he revealed a much thinner version of himself on the red carpet of the New York Film Festival. When a reporter asked Galifianakis if he had made any lifestyle changes to lose the weight, he responded with a straight face, “No, I'm just... I'm dying.” Clearly Galifianakis isn’t sharing his weight loss secrets.

Out with the old, in with the new
HBO has seen the light! This week the premium television network announced that they will launch a new stand-alone service for fans of its TV shows. Soon, homes without a cable subscription can sign up for HBO Go and get their fill of Game of Thrones and other HBO shows with just an Internet connection—leading people to wonder if this is the beginning of the end for cable providers.

Consumers also had a lot of new mobile devices to choose from this week, starting with our own line of Nexus gadgets like the Nexus 6 running the latest version of Android, 5.0 Lollipop. Meanwhile, Apple announced an updated version of the iPad.

The show’s just getting started
Is it awards show season already? It’s not—but that’s not stopping searchers from looking ahead. The Internet rejoiced when How I Met Your Mother and Gone Girl star Neil Patrick Harris said “Hosting the 2015 Academy Awards? Challenge accepted!” But with the Oscars red carpet still months away, searchers had their sights set on another celebrity bash: Paul Rudd's keg party… at his mom’s house… in the suburbs of Kansas City. What else are you supposed to do when mom’s out of town and the KC Royals just punched a ticket to the World Series after a nearly 30-year hiatus?

Tip of the week
‘Tis the season for pumpkin spice beers? Next time you’re in a new town and looking to grab a cold one just say “Ok Google, show me pubs near my hotel” and find your new favorite haunt.

by Emily Wood at October 17, 2014 02:36 PM

Chris Siebenmann

My experience doing relatively low level X stuff in Go

Today I wound up needing a program that spoke the current Firefox remote control protocol instead of the old -remote based protocol that Firefox Nightly just removed. I had my choice between either adding a bunch of buffer mangling to a very old C program that already did basically all of the X stuff necessary or trying to do low-level X things from a Go program. The latter seemed much more interesting and so it's what I did.

(The old protocol was pretty simple but the new one involves a bunch of annoying buffer packing.)

Remote controlling Firefox is done through X properties, which is a relatively low level part of the X protocol (well below the usual level of GUIs and toolkits like GTK and Qt). You aren't making windows or drawing anything; instead you're grubbing around in window trees and getting obscure events from other people's windows. Fortunately Go has low level bindings for X in the form of Andrew Gallant's X Go Binding and his xgbutil packages for them (note that the XGB documentation you really want to read is for xgb/xproto). Use of these can be a little bit obscure so it very much helped me to read several examples (for both xgb and xgbutil).

All told the whole experience was pretty painless. Most of the stumbling blocks I ran into were because I don't really know X programming and because I was effectively translating from an older X API (Xlib) that my original C program was using to XCB, which is what XGB's API is based on. This involved a certain amount of working out what the functions the old code was calling actually did and then figuring out how to translate them into XGB and xgbutil stuff (mostly the latter, because xgbutil puts a nice veneer over a lot of painstaking protocol bits).

(I was especially pleased that my Go code for the annoying buffer packing worked the first time. It was also pretty easy and obvious to write.)

One of the nice little things about using Go for this is that XGB turns out to be a pure Go binding, which means it can be freely cross compiled. So now I can theoretically do Firefox remote control from essentially any machine I remotely log into around here. Someday I may have a use for this, perhaps for some annoying system management program that insists on spawning something to show me links.

(Cross machine remote control matters to me because I read my email on a remote machine with a graphical program, and of course I want to click on links there and have them open in my workstation's main Firefox.)

Interested parties who want either a functional and reasonably commented example of doing this sort of stuff in Go or a program to do lightweight remote control of Unix Firefox can take a look at the ffox-remote repo. As a bonus I have written down in comments what I now know about the actual Firefox remote control protocol itself.

by cks at October 17, 2014 04:55 AM

Byron Miller

Austin Puppet Users Group – Join our Meetup!

It’s been too long since our initial meetup, so I’m thrilled to be getting some dates on the calendar. Right now, we plan on having the meetup on the 2nd Tuesday of each month from 6:30 pm until 8ish, with a special meetup on the 28th of October so we can have a PuppetConf 2015 recap and […]

by byronm at October 17, 2014 12:41 AM

October 16, 2014

Ubuntu Geek

UbuTricks – Script to install the latest versions of several games and applications in Ubuntu

UbuTricks is a program that helps you install the latest versions of several games and applications in Ubuntu.

UbuTricks is a Zenity-based, graphical script with a simple interface. Although early in development, its aim is to create a simple, graphical way of installing updated applications in Ubuntu 14.04 and future releases.
Read the rest of UbuTricks – Script to install the latest versions of several games and applications in Ubuntu (220 words)


by ruchi at October 16, 2014 11:17 PM

Aaron Johnson

Day 6: Volcanos, bubbling mud pots, swimming in nature baths and cows pooping: Iceland

NO VOMIT! We had an entire night with no one vomiting! Breakfast bright and early again and then we hit the road, big day today.

We drove for about 30 minutes and bagged our first geocache at a place called Tveir Brú and then did a nice hike up to a waterfall:

Said geocache was NOT kid friendly so again, if you’re ever in Iceland driving around Tveir Brú with small children, make sure they walk up to the edge of the 50 foot cliff where the cache is hidden with their head up, not staring at the GPS.

Second stop was at Krafla which apparently is a “caldera”, where we did a short (2 miles?) walk out over a flat plain:

to a small hill with a bunch of fissure vents and bubbling pools of waters:

On the way to the hill Reed made friends with about 10 lovely Japanese people, all of whom gave him (or he gave them?) a high five.

A 10 minute drive later and we were at Námafjall, which is a geothermal area with boiling mudpools and steaming fumaroles and, if you're not wearing disposable booties over your shoes like all the people getting off of the tour buses were, a giant mess for the car. Beck couldn’t take the smell and had to go back to the car (pretty sure he just wanted to finish the book he was reading) but we forced the little dudes to walk around with us. Mud pots were cool but I think this was my favorite:

We did lunch out of the trunk (peanut butter and jelly, apples and if you finished your sandwich, a cookie or two) and then we were back on the road again on our way to Lake Mývatn. Our first stop was at a place called Hverfjall, which is “… a tephra cone or tuff ring volcano in northern Iceland, to the east of Mývatn” as my friends at Wikipedia say. We did the 4×4 gravel road out to the base of the volcano and after a bit of prodding to get up this trail:

everyone made it to the top of the mountain, although not without us having to break out the “You’re a mountain lion! Growl like one!” strategy to get certain people up the hill that weren’t excited about climbing the mountain:

which everyone had to join in on:

but eventually all led to this:

Cool spot.

Our slave driver navigator then proceeded to direct us to Dimmuborgir (a large area of unusually shaped lava fields east of Mývatn) where we did yet another short hike out to see one of the structures which supposedly looks like an old cathedral:

Pretty sure at this point that everyone was really cold and tired and we completely lucked out because the very next thing we did after getting back into the car was a bath in the natural hot springs:

which was AMAZING for all involved, especially people that had floaties. Highly recommended if you’re ever driving through northern Iceland after a long day of volcano hiking and mud pot viewing.

Day only got better though because I found an amazing restaurant / farm on Foursquare called Vogafjos Cowshed Cafe, where we (adults) not only had an EPIC meal (braised lamb shanks and pan-fried arctic char) but we got to watch cows pooping and peeing and then eventually get milked which if you’re between the ages of 2 and 5 is PRETTY COOL:

Finally, we drove to the hotel, which was another 30 miles away and everyone hit the sack.

  • Weird mud pots and fissures: too many to count
  • People that had to be encouraged to make mountain lion sounds to bring out their inner lion to make it up the mountain: 1
  • People vomiting at night: 0 (WOOHOO!)
  • Geocaches: 2

by ajohnson at October 16, 2014 10:28 PM

Day 5: The Farm and lots of driving: Iceland

We spent the night at a farm (Brunnholl) that, like most places, was in the absolute middle of nowhere and got there the night before after it got dark so when we woke up (after the middle one threw up a minimum of 10 times in the middle of the night) we had some breakfast and then took a bit of time to look around. First thing we noticed outside was a border collie named Mila, who was carrying a stick and very obviously wanted me to play with her. The dudes and I exited through the side of the dining room and stumbled on to not only a very playful and determined dog but also a trampoline (partially frozen) and a sandbox full of black sand, which all combined to occupy us for at least 15 minutes before we discovered the other side of the farm, starting with the ATV:

which I’m pretty sure Reed would have figured out how to drive away if the key had been in the ignition. But better than the ATV were the frozen puddles, all of which had to be stomped on or hit with a stick and the Icelandic horses that they had in pasture, one of which I became very close friends with:

Pretty sure she wanted to get in the car with us to go on the rest of the trip, sadly we had no room for her. We ended up sticking around the farm and meeting the cows, hanging with the horses and generally having a relaxing farm morning until about 10:30 or 11am, much later than our normal departures.

This turned out to be a good thing because there wasn’t much to see or at least there wasn’t much that we stopped to see on Day 5. We ended up nabbing a geocache (always good to get out, stretch the legs, pee and get some little dude energy out) that was on a side road and then made it to have lunch at Kaffi Steinn in Djúpivogur, which is a teensy little town right on the water.

Back in the saddle an hour later, we drove and drove… and then on a whim I pulled off at a black sand beach that turned out to have some good climbing and rock throwing facilities that gave everyone a breather from being in the car:

and then we turned inland and drove through some beautiful mountain ranges:

although the entire country is a giant beautiful mountain range in some ways (would be a good bet by the way, I doubt you could be anywhere in Iceland on a clear day and not be able to see a giant mountain range somewhere).

We eventually made it to our final destination, which turned out to be a newly renovated hotel called Gistihúsið Egilsstöðum in Egilsstaðir which was VERY nice compared to where we had been staying. We dropped our stuff off and then immediately got back in the car to go and see a lake that supposedly had a monster in it, dropped off our first “trackable” geocache on the way to that:

and then drove over to see a waterfall that ended up being a hike that we couldn’t make before the sun went down. Dinner at Subway because it was cheap. Note: no meatball subs in Europe.

  • Ice puddles smashed: too many to count
  • Icelandic horses that are my best friend: 1
  • People vomiting at night: 1
  • Geocaches: 2

by ajohnson at October 16, 2014 09:43 PM


The new economy and systems administration

"Over the next few decades demand in the top layer of the labor market may well centre on individuals with high abstract reasoning, creative, and interpersonal skills that are beyond most workers, including graduates."
-Economist, vol413/num8907, Oct 4, 2014, "Special Report: The Third Great Wave. Productivity: Technology isn't Working"

The rest of the Special Report lays out a convincing argument that people who have automation-creation as part of their primary job duties are in for quite a bit of growth and that people in industries subject to automation are going to have a hard time of it. This has a direct impact on sysadminly career direction.

In the past decade Systems Administration has been moving away from mechanics who deploy hardware, install software and fix problems, and towards Engineers who are able to build automation for provisioning new computing instances and installing application frameworks, and who know how to troubleshoot problems with all of that. In many ways we're a specialized niche of Software Engineering now, and that means we can ride the rocket with them. If you want to continue to have a good job in the new industrial revolution, keep plugging along and don't become the dragon in the datacenter people don't talk to.

Abstract Reasoning

Being able to comprehend how a complex system works is a prime example of abstract reasoning. Systems Administration is more than just knowing the arcana of init, grub, or WMI; we need to know how systems interact with each other. This is a skill that has been a pre-requisite for Senior Sysadmins for several decades now, so this isn't new. It's already on our skill-path. This is where System Engineers make their names, and sometimes become Systems Architects.


Creativity

This has been less on our skill-path, but is definitely something we've been focusing on in the past decade or so. Building large automation systems, even with frameworks such as Puppet or Chef, takes a fair amount of both abstract reasoning and creativity. If you're good at this, you've got 'creative' down.

This has impacts for the lower rungs of the sysadmin skill-ladder. Brand new sysadmins are going to be doing less racking-and-stacking and more parsing and patching of Ruby or Ruby-like DSLs.

Interpersonal Skills

This is where sysadmins tend to fall down. A lot of us got into this gig because we didn't have to talk to people who weren't other sysadmins. Technology made sense, people didn't.

This skill is more a reflection of the service-oriented economy, and sysadmins are only sort of that, but our role in product creation and maintenance is ever more social these days. If you're one of two sysadmin-types in a company with 15 software engineers, you're going to have to learn how to have a good relationship with software engineers. In olden days, only very senior sysadmins had to have the Speaker to Management skill, now even mid-levels need to be able to speak coherently to technical and non-technical management.

It is no coincidence that many of the tutorials at conferences like LISA are aimed at building business and social skills in sysadmins. It's worth your time to attend them, since your career advancement depends on it.

Yes, we're well positioned to do well in the new economy. We just have to make a few changes we've known about for a while now.

by SysAdmin1138 at October 16, 2014 05:10 PM

Rands in Repose

The First Addition to the Cloud Classification System in Half a Century

But soon after launching the site, Pretor-Pinney received a couple of pictures that didn’t quite fit into existing classifications. One image, taken from the 12th floor of an office building in Cedar Rapids, Iowa, looked positively apocalyptic — a violent and undulating thing menacing the city skyline. “They struck me as being rather different from the normal undulatus clouds,”


by rands at October 16, 2014 03:44 PM

Everything Sysadmin

Results of the PuppetConf 2014 Raffle

If you recall, the fine folks at Puppet Labs gave me a free ticket to PuppetConf 2014 to give away to a reader of this blog. Here's a report from our lucky winner!

Conference Report: PuppetConf 2014

by Anastasiia Zhenevskaia

You never know when you will be lucky enough to win a ticket to PuppetConf, one of the greatest conferences of this year. My "moment" happened just three weeks before the conference and let me dive into things I'd never thought about.

Being a person who has worked mostly with front-end development, I was always a little bit scared and puzzled by more complicated things. Fortunately, the conference helped me understand how important and simple all these processes could be. I was so impressed by the personality of all the speakers. Their eyes were full of passion; their presentations were clear, informative and breath-taking. Their attitude towards the things they're working on: exceptional. These are people you might want to work with, share ideas with and create amazing things.

I'm so glad that I got this opportunity and wish that everybody could get this chance and taste the atmosphere of Puppet!

October 16, 2014 02:28 PM

Server Density

A guide to handling incidents, downtime and outages

Outages and downtime are inevitable. Designing your systems to handle failure is a key part of modern infrastructure architecture which makes it possible to survive most problems, however there will be incidents you didn’t think about, software bugs you didn’t catch and other events which result in downtime for your service.

Microsoft, Amazon and Google spend $billions every quarter and even they still have outages. How much do you spend?

There are some companies that constantly seem to have problems and suffer for it unnecessarily. Regular outages ultimately become unacceptable, but if you adopt a few key principles and design your systems properly, customers can forgive you the few times you do have service incidents.

Step 1: Planning

If critical alerts result in panic and chaos then you deserve to suffer from the incident! There are a number of things you can do in advance to ensure that when something does go wrong, everyone on your team knows what they should be doing.

  • Put in place the right documentation. This should be easily accessible, searchable and up to date. We use Google Docs for this.
  • Use proper config management, be it Puppet, Chef, Ansible, SaltStack or some other system, to be able to make mass changes to your infrastructure in a controlled manner (see the sketch below). It also helps your team understand novel issues because the code that defines the setup is easily accessible.
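As an illustration, here is a minimal sketch of what a controlled change might look like, assuming Puppet running in agent mode (Chef, Ansible and Salt have equivalent dry-run modes):

# Preview what the next run would change, without applying anything
puppet agent --test --noop

# Once the pending changes look right, apply them in a normal, logged run
puppet agent --test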

Unexpected failures

Be aware of your whole system. Unexpected failures can come from unusual places. Are you hosted on AWS? What happens if they suffer an outage and you need to use Slack or Hipchat for internal communication? Are you hosted on Google Cloud? What happens if your GMail is unavailable during a Google Cloud outage? Are you using a data center within the city you live in? What happens if there’s a weather event and the phone service is knocked out?

Step 2: Be ready to handle the alerts

Some people hate being on call, others love it! Either way, you need a system to handle on call rotations, escalating issues to other members of the team, planning for reachability and allowing people to go off-call after incidents. We use PagerDuty on a weekly rotation through the team and consider things like who is available, internet connectivity, illness, holidays and looping in product engineering so issues waking people up can be resolved quickly.


More and more outages are being caused by software bugs getting into production. It's never just a single thing that goes wrong – a cascade of problems culminates in downtime – so you need rotations amongst different teams, such as frontend engineering, not just ops.

Step 3: Deal with it, using checklists

Have a defined process in place ready to run through whenever the alerts go off. Using a checklist removes unnecessary thinking so you can focus on the real problem, and ensures key actions are taken and not forgotten. Have a channel for communication both internally and externally – there's nothing worse than being the customer of a service that is down with no idea whether they're working on it or not.

Google Docs Incident Handling

Step 4: Write up a detailed postmortem

This is the opportunity to win back trust. If you follow the steps above and provide accurate, useful information during the outage so people know what is going on, this is the chance to write it up, explain what happened, what went wrong and crucially, what you are going to do to prevent it from happening again. Outages highlight unknown system flaws and it’s important to tell your users that the hole no longer exists, or is in the process of being closed.

Interested in learning more?

We are going live on the internet in the form of a Q&A webinar on the 11th November 2014 @ 18:30 BST. We’ll be discussing things to consider when handling incidents, on-call rotations and outage status page communications. Join us for free!

The post A guide to handling incidents, downtime and outages appeared first on Server Density Blog.

by David Mytton at October 16, 2014 12:00 PM

Chris Siebenmann

Don't use dd as a quick version of disk mirroring

Suppose, not entirely hypothetically, that you initially set up a server with one system disk but have come to wish that it had a mirrored pair of them. The server is in production and in-place migration to software RAID requires a downtime or two, so as a cheap 'in case of emergency' measure you stick in a second disk and then clone your current system disk to it with dd (remember to fsck the root filesystem afterwards).

(This has a number of problems if you ever actually need to boot from the second disk, but let's set them aside for now.)

Unfortunately, on a modern Linux machine you have just armed a time bomb that is aimed at your foot. It may never go off, or it may go off more than a year and a half later (when you've forgotten all about this), or it may go off the next time you reboot the machine. The problem is that modern Linux systems identify their root filesystem by its UUID, not its disk location, and because you cloned the disk with dd you now have two different filesystems with the same UUID.

(Unless you do something to manually change the UUID on the cloned copy, which you can. But you have to remember that step. On extN filesystems, it's done with tune2fs's -U argument; you probably want '-U random'.)
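As a concrete sketch of both the check and the fix (device names here are assumptions; substitute your actual disks):

# After a dd clone, both partitions will report the same filesystem UUID
blkid /dev/sda1 /dev/sdb1

# Give the clone on the second disk a fresh random UUID (extN filesystems only)
tune2fs -U random /dev/sdb1

# Verify the two partitions now report different UUIDs
blkid /dev/sda1 /dev/sdb1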

Most of the time, the kernel and initramfs will probably see your first disk first and inventory the UUID on its root partition first and so on, and thus boot from the right filesystem on the first disk. But this is not guaranteed. Someday the kernel may get around to looking at sdb1 before it looks at sda1, find the UUID it's looking for, and mount your cloned copy as the root filesystem instead of the real thing. If you're lucky, the cloned copy is so out of date that things fail explosively and you notice immediately (although figuring out what's going on may take a bit of time and in the mean time life can be quite exciting). If you're unlucky, the cloned copy is close enough to the real root filesystem that things mostly work and you might only have a few little anomalies, like missing log files or mysteriously reverted package versions or the like. You might not even really notice.

(This is the background behind my recent tweet.)

by cks at October 16, 2014 06:14 AM

Google Webmasters

Best practices for XML sitemaps & RSS/Atom feeds

Webmaster level: intermediate-advanced

Submitting sitemaps can be an important part of optimizing websites. Sitemaps enable search engines to discover all pages on a site and to download them quickly when they change. This blog post explains which fields in sitemaps are important, when to use XML sitemaps and RSS/Atom feeds, and how to optimize them for Google.

Sitemaps and feeds

Sitemaps can be in XML sitemap, RSS, or Atom formats. The important difference between these formats is that XML sitemaps describe the whole set of URLs within a site, while RSS/Atom feeds describe recent changes. This has important implications:

  • XML sitemaps are usually large; RSS/Atom feeds are small, containing only the most recent updates to your site.
  • XML sitemaps are downloaded less frequently than RSS/Atom feeds.

For optimal crawling, we recommend using both XML sitemaps and RSS/Atom feeds. XML sitemaps will give Google information about all of the pages on your site. RSS/Atom feeds will provide all updates on your site, helping Google to keep your content fresher in its index. Note that submitting sitemaps or feeds does not guarantee the indexing of those URLs.

Example of an XML sitemap:

<?xml version="1.0" encoding="utf-8"?>
<urlset xmlns="">
   <!-- optional additional tags -->

Example of an RSS feed:

<?xml version="1.0" encoding="utf-8"?>
   <!-- other tags -->
     <!-- other tags -->
     <pubDate>Mon, 27 Jun 2011 19:34:00 +0100</pubDate>

Example of an Atom feed:

<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="">
 <!-- other tags -->
   <link href="" />
   <!-- other tags -->

“other tags” refers to both optional and required tags as defined by their respective standards. We recommend that you specify the required tags for Atom/RSS, as they will help you to appear on other properties that might use these feeds, in addition to Google Search.

Best practices

Important fields

XML sitemaps and RSS/Atom feeds are, at their core, lists of URLs with metadata attached to them. The two most important pieces of information for Google are the URL itself and its last modification time.

URLs

URLs in XML sitemaps and RSS/Atom feeds should adhere to the following guidelines:

  • Only include URLs that can be fetched by Googlebot. A common mistake is including URLs that are disallowed by robots.txt (and so cannot be fetched by Googlebot), or URLs of pages that don't exist.
  • Only include canonical URLs. A common mistake is to include URLs of duplicate pages. This increases the load on your server without improving indexing.

Last modification time

Specify a last modification time for each URL in an XML sitemap and RSS/Atom feed. The last modification time should be the last time the content of the page changed meaningfully. If a change is meant to be visible in the search results, then the last modification time should be the time of this change.

  • XML sitemap uses  <lastmod>
  • RSS uses <pubDate>
  • Atom uses <updated>

Be sure to set or update last modification time correctly:

  • Specify the time in the correct format: W3C Datetime for XML sitemaps, RFC3339 for Atom and RFC822 for RSS (see the examples below).
  • Only update modification time when the content changed meaningfully.
  • Don’t set the last modification time to the current time whenever the sitemap or feed is served.
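As an illustration, here is one way to produce a timestamp in each of these formats with GNU date (a sketch assuming GNU coreutils):

# W3C Datetime / RFC 3339, as used by XML sitemaps and Atom
date -u +"%Y-%m-%dT%H:%M:%S+00:00"

# RFC 822-style date, as used by RSS <pubDate>
date -u -R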

XML sitemaps

XML sitemaps should contain URLs of all pages on your site. They are often large and update infrequently. Follow these guidelines:

  • For a single XML sitemap: update it at least once a day (if your site changes regularly) and ping Google after you update it (see the sketch below).
  • For a set of XML sitemaps: maximize the number of URLs in each XML sitemap. The limit is 50,000 URLs or a maximum size of 10MB uncompressed, whichever is reached first. Ping Google for each updated XML sitemap (or once for the sitemap index, if that's used) every time it is updated. A common mistake is to put only a handful of URLs into each XML sitemap file, which usually makes it harder for Google to download all of these XML sitemaps in a reasonable time.
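A sitemap ping is just an HTTP request against Google's sitemap ping endpoint with your sitemap URL as a parameter. A minimal sketch (the sitemap URL is a placeholder; substitute your own):

# Tell Google that the sitemap at http://example.com/sitemap.xml has been updated
curl "http://www.google.com/ping?sitemap=http%3A%2F%2Fexample.com%2Fsitemap.xml"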


RSS/Atom feeds

RSS/Atom feeds should convey recent updates of your site. They are usually small and updated frequently. For these feeds, we recommend:

  • When a new page is added or an existing page meaningfully changed, add the URL and the modification time to the feed.
  • In order for Google not to miss updates, the RSS/Atom feed should contain all updates since at least the last time Google downloaded it. The best way to achieve this is by using PubSubHubbub (see the sketch below). The hub will propagate the content of your feed to all interested parties (RSS readers, search engines, etc.) in the fastest and most efficient way possible.
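For reference, a publish notification to a PubSubHubbub hub is a single HTTP POST with two form parameters. A rough sketch (the feed URL is a placeholder, and HUB_URL stands for whatever hub your feed declares in its rel="hub" link):

# Notify the hub that the feed has new content
curl -d "hub.mode=publish" -d "hub.url=http://example.com/feed.atom" HUB_URL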

Generating both XML sitemaps and Atom/RSS feeds is a great way to optimize crawling of a site for Google and other search engines. The key information in these files is the canonical URL and the time of the last modification of pages within the website. Setting these properly, and notifying Google and other search engines through sitemap pings and PubSubHubbub, will allow your website to be crawled optimally, and represented accordingly in search results.

If you have any questions, feel free to post them here, or to join other webmasters in the webmaster help forum section on sitemaps.

by Google Webmaster Central ( at October 16, 2014 05:24 AM

October 15, 2014

Rands in Repose

The Old Guard

Dunbar’s Number is a favorite blunt diagnosis for the pains that affect rapidly growing teams. The number, which is somewhere between 100 and 250, describes a point at which a group of people can no longer effectively maintain social connections in their respective heads. What was simple from a communication perspective becomes costly. What was a familiar family that you saw wandering the hallway becomes Stranger Town.

It resonates. It intuitively feels right that we have a threshold for the number of relationships we can maintain in our heads. If your team or company is rapidly growing, it’s worth thinking about how you’re going to help the team feel connected, but I think there is a more interesting emergent behavior during rapid growth, and it’s led by The Old Guard.

They Won

Here’s the poetic origin story of The Old Guard:

A small group of inspired people has an idea, and just about everyone tells them the idea is really stupid, but that’s exactly the same response to the idea that they hear every other day. This small group ignores these naysayers and doggedly pursues the idea, even though on a daily basis it feels like the world is specifically designed to prevent them from succeeding.

It’s a war. The small group is at war with conventional wisdom; they are at war with every comparable startup that is remotely in the same space. But, most importantly, they are at war with themselves. In addition to fighting to bring the idea into the world, they are fighting amongst themselves.

Each day, this small group is learning who they are as part of their struggle to survive. They are learning each person’s strengths and weaknesses. They are figuring out how each person communicates, and each of these essential lessons is learned under the constant threat of irrelevance. These lessons are hard earned – some folks don’t make it – and those who survive this period of painful definition are tightly bound together. They share the same mental scars and they tell the same stories because they have an intimate shared history.

And then the Old Guard starts winning.

The New Guard

After years of struggling, the dream that became the idea becomes the business. A corner is turned and the question changes from, “Are we going to survive?” to “How are we going to scale?” As part of this acceleration program comes the arrival of eager new faces who have heard the stories of success in the face of adversity. They are inspired by these stories and they want to figure out how they can help.

When the New Guard shows up, they notice, well, beautiful, beautiful chaos. Ideas are coming from every direction, decisions are collaborative and high velocity because the team is small enough that you can efficiently ask everyone’s opinion. It’s intoxicating. Execution is shared and terrifyingly fast because there is little desire to bicker. Most everyone still believes they are on the brink of disaster. That’s mainly because they’ve lived in this world so long.

The organization of the Old Guard is instinctively flat. There is rapid and organic error correction because everyone has line of sight on everything. The cost of gathering situational awareness is low because the Old Guard has borderline mystical abilities to figure things out. This is because they’ve got a near-complete mental catalog of the people, their knowledge, and their abilities.

The Old Guard has recognized experience, but more importantly, each day the Old Guard demonstrates to the New Guard that they have instinct. They can rapidly make important decisions with the barest of facts and they have a sense of urgency motivated by their deeply rooted belief that this is the home that they built with their hands and, again, they believe this precious thing could be destroyed in a moment.

The Old Guard’s instinct is well earned and essential, but instinct doesn’t scale without help.

New Guard Friction

The divide that is created between the Old Guard and New Guard is interestingly paradoxical. See, the Old Guard recognizes there’s simply too much to do and there is no way the expertise now needed to evolve is under the roof. The problem is these new hires are a cure to a disease that the Old Guard both created and loves. I’ll explain.

The Old Guard hires eager people to build more amazing things, but each additional human creates a growing knowledge and communication tax. The team needs to spend time to make sure each new person understands the company, how things are done, who is responsible for what, and they eventually need to know their responsibilities. Pretty simple, right? Standard on-boarding, right? What about when it’s 10 people? Or 100? Multiply all their educational and communication needs with the fact that each of these new folks is going to add their own unique signal to the communication tapestry, each person is slightly altering the culture simply with their presence, and, oh yeah, everything is going to change in six months anyhow because the team is growing so fast.

The addition of these new people to the existing population transforms the comfortable chaos into legitimate chaos. Decisions start to happen more slowly, responsibility and ownership become opaque, execution becomes stove-piped, and work is duplicated because the organism has likely crossed Dunbar’s number. Situational awareness has become expensive because learning can no longer occur via osmosis.

The New Guard, armed with their new hire spirit and their lack of historical organizational instinct, starts on important work that the Old Guard both desires and hates at the same time.

The New Guard:

  • Starts to write things down both for themselves and for those who will come after them.
  • Sits down with different teams and agrees to contracts on how they will get work done.
  • Imports language from prior companies to support and define their various emerging causes. This language often comes in the form of important sounding, but equally mystifying, acronyms.
  • And they do a lot of this work via the scheduling of meetings.

The Old Guard’s healthy network of informational sources inside of the company (who are also primarily Old Guard) provides an increasingly worrying diagnosis: the New Guard is creating a lot of process that smells like big company bullshit. The Old Guard worries: they worry that all these eager new faces in their company are fundamentally changing the culture.

Here’s the rub: The Old Guard can’t scale their company without the help of the New Guard, but the Old Guard’s instincts about what works in this particular organism are based on lessons from the past rather than the requirements of the future. When the Old Guard is tested, when something goes sideways in the company, they fall back on what has always worked in the past, and while this strategy feels familiar and fast, it might not allow them to scale.

A Culture Quandary

The critique of this period, as the power of the New Guard rises and their skirmishes with the established Old Guard increase, manifests in different ways: “We’re moving slower”, “I don’t know what’s going on”, “We feel like a big company”, or “We’re forgetting who we are.”

In order to build a healthy company that scales, you’re going to need to build infrastructure and process that is going to connect the various parts of your company. This work is going to feel heavy and unnecessary to those who’ve historically been able to do this work effortlessly and instinctively.

It is entirely possible that too much process or the wrong process is developed during this build-out, but when this inevitable debate occurs, the debate should not be about the process. It’s a debate about values. The first question isn’t, “Is this a good, bad, or efficient process?” The first question is, “How does this process reflect our values?”

The largest battles that I’ve seen at prior companies between the New Guard and the Old Guard exist because the Old Guard has not effectively documented and shared the values that the company embodies. This creates the following dialog:

  • Old Guard: I feel this process is heavy.
  • New Guard: I’ve seen this process work at a great many companies and here are the metrics to prove it.
  • Old Guard: Yeah, something doesn’t feel right.
  • New Guard: What the hell does feel have to do with it?

What is missing from this dialog is a discussion. The process feels heavy because in this particular hypothetical company, we value velocity over completeness. Whether they’ve written them down or not, the Old Guard embodies the initial values of the company and when they say, “It feels off…” what they are poorly articulating is, “This process that you’re building does not support one (or more) of the key values of the company.”

The Old Guard is the cultural bellwether of the company. I believe that culture is a slippery thing to fully define, but I do believe it is the responsibility of the Old Guard not only to take the time to define the key values that are the pillars of that culture, but also to communicate the nuance of those values over and over again, and, lastly, when it becomes apparent they are no longer serving the company, to be willing to let those values evolve.

by rands at October 15, 2014 03:09 PM

Everything Sysadmin

Tutorial: Evil Genius 101

I'm teaching a tutorial at Usenix LISA called "Evil Genius 101: Subversive Ways to Promote DevOps and Other Big Changes".

Whether you are trying to bring "devops culture" to your workplace, or just get approval to purchase a new machine, convincing and influencing people takes up a big part of a system administrator's time.

For the last few years I've been teaching this class called "Evil Genius 101" where I reveal my tricks for understanding people and swaying their opinion. None of these are actually evil, nor do I teach negotiating techniques. I simply list 3-4 techniques I've found successful for each of these situations: talking to executives, talking to managers, talking to coworkers, and talking to users.

Seating is limited. Register now!

Evil Genius 101: Subversive Ways to Promote DevOps and Other Big Changes

Who should attend:

Sysadmins and managers looking to influence the technology and culture of your organization.


Monday, 10-Nov, 1:30pm-5pm at Usenix LISA


You want to innovate: deploy new technologies such as configuration management, kanban, a wiki, or standardized configurations. Your coworkers don't want change: they like the way things are. Therefore, they consider you evil. However, you aren't evil; you just want to make things better. Learn how to talk your team, managers and executives into adopting DevOps techniques and culture.

Take back to work:

  • Help your coworkers understand and agree with your awesome ideas
  • Convince your manager about anything. Really.
  • Get others to trust you so they are more easily convinced
  • Decide which projects to do when you have more projects than time
  • Turn the most stubborn user into your biggest fan
  • Make decisions based on data and evidence

Topics include:

  • DevOps "value mapping" exercise: Understand how your work relates to business needs.
  • So much to do! What should you do first?
  • How to sell ideas to executives, management, co-workers, and users.
  • Simple ways to display data to get your point across better.

Register today for Usenix LISA 2014!

October 15, 2014 02:28 PM

Ferry Boender

POODLE: SSLv3 bug summary

Yet Another SSL bug: This time a problem with SSLv3.

Most browsers and web servers support SSLv3. Many don't use it by default, instead opting for newer protocol versions such as TLS 1.0 and higher. The problem is that attackers can force a downgrade of the negotiated protocol, which will result in the SSLv3 protocol being used to communicate.

No real fixes are available and vendors will probably not be sending out updates to fix this issue. The recommended method of mitigation is to disable SSLv3 on your servers and your browsers. SSLv3 is old and only the following browsers can't work with anything better:

  • Internet Explorer up to (and including) v6
  • Opera v1 through v4 (the current version is 12)

Other browsers (Firefox, Chrome, etc) have supported TLSv1.0+ from their first release.

To test if you're vulnerable:

openssl s_client -connect HOSTNAME:443 -ssl3

If you do NOT get a message saying something like "ssl handshake failure", your server is vulnerable.

A quick test (which I do not guarantee to be correct) is:

echo | openssl s_client -connect HOSTNAME:443 -ssl3 2>/dev/null | grep "Server certificate"

If this returns "Server certificate", you're vulnerable.

To fix this for Apache, edit your SSL module configuration (/etc/apache2/mods-enabled/ssl.conf on Debian-derived systems) and add "-SSLv3" to your protocols to disable SSLv3:

SSLProtocol all -SSLv2 -SSLv3

This disables SSLv2 and SSLv3, which are both broken. It also means users with Internet Explorer 5 or 6 won't be able to reach your secure website anymore.
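Once Apache has been reloaded you can re-run the test from above to confirm the change took effect (a sketch for Debian-derived systems; substitute your own hostname):

apache2ctl configtest && service apache2 reload

# The SSLv3 handshake should now fail, so this prints nothing
echo | openssl s_client -connect HOSTNAME:443 -ssl3 2>/dev/null | grep "Server certificate"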


by admin at October 15, 2014 12:27 PM

Evaggelos Balaskas

read it later

a blog post about Wallabag

Tons of information pass through your eyes every day. People now browse more than they read, and there are some things you really want to store and read when you have some free time. Bookmarks are pretty useful for storing the URL, but the actual content could be moved somewhere else or even removed from its original place.

Read-it-later applications work their magic by storing the actual content offline (or caching it) in another location. Some of these applications (or online services) can synchronize their content to your tablet/smartphone or even your ebook reader. The best known service is, of course, Pocket.

But then again, you have to register with yet another online service that uses your email as a user ID and now knows every single thing you like to read! And what happens if the company behind this service decides to shut it down, or changes its policy and sells your info, or gets hacked, or … whatever?

Well that’s the nice thing about free software!

You can self-host your own application for saving web pages (aka read-it-later) with wallabag.

Just download and extract the latest version inside your web server document root path:

cd /var/www/

wget -c -O
mv wallabag_VERSION wallabag
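As a rough sketch of the full download-and-extract sequence (the URL and version below are placeholders; get the real ones from wallabag's download page):

cd /var/www/
wget -c -O wallabag.tar.gz https://WALLABAG_DOWNLOAD_URL/wallabag-VERSION.tar.gz
tar xzf wallabag.tar.gz
mv wallabag-VERSION wallabag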

At this moment you have your own self-hosted read-it-later service.

You need to generate a token for apps to connect with your wallabag instance (login –> config –> Feeds –> generate token) and it will produce something like that:

Token: sd/sdfSDFsdffd20
User ID: 1

Add the Firefox add-on from here, and then you only have to configure your wallabag URL.

For your smartphone you can use this app
wallabag from F-droid

For this app you need to enter the token so that you can synchronize your feeds to your phone.

Wallabag has many features - the most useful one for me is the epub export. I can put my articles on my ebook reader!

How about security? I don't bother to set up wallabag behind an SSL certificate or a “basic auth” login because I only store public articles! If someone obtains my credentials they can use wallabag to mess with my articles (OK - I have backups), but they will not gain access to “private” information. That said - it doesn't mean that I don't value the above (on the contrary) - it's just a way of saying that in my wallabag instance, I only store already public, published web pages!

[Edit] UX - update - support - donate

I forgot to mention in my original post that there are 3 major things I appreciate when using a free software project.

First is the UX: if something is too difficult for me to use, I'll pass on it, even if it is the best project ever. Wallabag isn't top notch on UX, but the design isn't distracting at all when reading an offline article. The work that nicosomb has done on that is really nice.

Second is the update process: if it is too hard for me to update a project, I will soon get bored of doing it. I am an intermediate Linux user and an open source advocate, but I am lazy. Too lazy. Wallabag is super easy to update: just download and extract. I am amazed that this process isn't already built into wallabag's config section; I hope to see that in the next release. But it's really nice to be notified (via internal checks when using the config page) and then do the hard work of opening a shell, logging in, downloading and extracting the new release :P

The third thing I forgot to mention is support. Wallabag is active and has a real support process - something that not many open source projects have. And Nicola (the core developer) isn't a hard man to find on social media. That's always useful and handy for small things, and it's good to know that the developer is not MIA.

Finally, I choose to support projects via donations. My donations are always small - because I don't (yet) have millions to spare. But even a small contribution from many people can help pay for the VPS or other costs that the developer has to pay out of his own pocket.

Tag(s): wallabag

October 15, 2014 09:28 AM

Google Blog

Android: Be together. Not the same.

Good things happen when everybody’s invited. A few years ago, we had the thought that phones (and stuff that hadn’t even been invented yet like tablets and smart watches) would be way more interesting if everyone could build new things together. So we created Android as an open platform, and put it out there for everyone to imagine, invent, make, or buy whatever they wanted.

Since then, all kinds of people—from companies big and small to folks on Kickstarter, kids in schools, and crazy smart developers—have been innovating faster, together, more than we ever could alone. And the best part is that every time someone new joins in, things get more interesting, unexpected, and wonderful for all of us.

Getting everyone in on the party is the same spirit behind Android One—an effort recently launched in India (coming to other countries soon) to make great smartphones available to the billions of people around the world who aren’t yet online. It’s also why we’re excited about Lollipop, our newest software release, which is designed to meet the diverse needs of the billion-plus people who already use Android today.

Joining the party: Android 5.0 Lollipop
As previewed at Google I/O, Lollipop is our largest, most ambitious release on Android with over 5,000 new APIs for developers. Lollipop is designed to be flexible, to work on all your devices and to be customized for you the way you see fit. And just like Android has always been, it’s designed to be shared.

Lollipop is made for a world where moving throughout the day means interacting with a bunch of different screens—from phones and tablets to TVs. With more devices connecting together, your expectation is that things just work. With Lollipop, it’s easier than ever to pick up where you left off, so the songs, photos, apps, and even recent searches from one of your Android devices can be immediately enjoyed across all the other ones.

As you switch from one screen to another, the experience should feel the same. So Lollipop has a consistent design across devices—an approach we call Material Design. Now content responds to your touch, or even your voice, in more intuitive ways, and transitions between tasks are more fluid.

Lollipop also gives you more control over your device. You can now adjust your settings so that only certain people and notifications can get through, for example, when you’re out to dinner or in the middle of an important meeting. And when an important notification does come through, you can see it directly from the lockscreen.

And because we’re using our devices a lot more, there’s a new battery saver feature that extends the life of your device by up to 90 minutes—helpful if you’re far from a power outlet. We’ve enabled multiple user accounts and guest user mode for keeping your personal stuff private. And you can now secure your device with a PIN, password, pattern, or even by pairing your phone to a trusted device like your watch or car with Smart Lock. But this is just a small taste of Lollipop. Learn more on

Meet the Nexus family, now running Lollipop
Advances in computing are driven at the intersection of hardware and software. That's why we’ve always introduced Nexus devices alongside our platform releases. Rather than creating software in the abstract, we work with hardware partners to build Nexus devices to help push the boundaries of what's possible. Nexus devices also serve as a reference for the ecosystem as they develop on our newest release. And for Lollipop, we have a few new Nexus treats to share with you.
First, with Motorola, we developed the Nexus 6. This new phone has a contoured aluminum frame, a 6-inch Quad HD display and a 13 megapixel camera. The large screen is complemented by dual front-facing stereo speakers that deliver high-fidelity sound, making it as great for movies and gaming as it is for doing work. It also comes with a Turbo Charger, so you can get up to six hours of use with only 15 minutes of charge.

Next, a new tablet built in partnership with HTC. Nexus 9, with brushed metal sides and 8.9-inch screen, is small enough to easily carry around in one hand, yet big enough to work on. And since more and more people want to have the same simple experience they have on their tablets when they have to do real work, we designed a keyboard folio that magnetically attaches to the Nexus 9, folds into two different angles and rests securely on your lap like a laptop.

Finally, we’re releasing the first device running Android TV: Nexus Player, a collaboration with Asus, is a streaming media player for movies, music and videos. It's also a first-of-its-kind Android gaming device. With Nexus Player you can play Android games on your HDTV with a gamepad, then keep playing on your phone while you're on the road. Nexus Player is Google Cast Ready so you can cast your favorite entertainment from almost any Chromebook or Android or iOS phone or tablet to your TV.

Nexus 9 and Nexus Player will be available for pre-order on October 17. Nexus 9 will be in stores starting November 3. Nexus 6 will be available for pre-order in late October and in stores in November—with options for an unlocked version through Play store, or a monthly contract or installment plan through carriers, including AT&T, Sprint, T-Mobile, U.S. Cellular, and Verizon. Specific carrier rollout timing will vary. Check out for more details on availability.

Android 5.0 Lollipop, which comes on Nexus 6, Nexus 9 and Nexus Player, will also be available on Nexus 4, 5, 7, 10 and Google Play edition devices in the coming weeks.

The party’s just getting started
With this latest release of Android Lollipop, we're excited to continue working with our developer community, hardware partners, and all of you. More ideas and more creators is what gets us all to better ideas faster. And since everyone's invited to the party, we hope you'll join in the fun by creating and sharing an Android character that captures a little bit of who you are—one of a kind. Enjoy!

by Emily Wood ( at October 15, 2014 10:06 AM


Freexian’s second report about Debian Long Term Support

Like last month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September 2014, 3 contributors were paid for 11 hours each. Here are their individual reports:

Evolution of the situation

Compared to last month, we have gained 5 new sponsors, which is great. We’re now at almost 25% of a full-time position, but we’re not done yet. We believe that we would need at least twice as many sponsored hours to do reasonable work on at least the most used packages, and possibly four times as many to be able to cover the full archive.

We’re now at 39 packages that need an update in Squeeze (+9 compared to last month), and the contributors paid by Freexian handled 11 of them last month (which works out to roughly 3 hours per update, CVE triage included).

Open questions

Dear readers, what can we do to convince more companies to join the effort?

The list of sponsors contains almost exclusively companies from Europe. It’s true that Freexian’s offer is in Euros, but the economy is world-wide and international invoices are common. When Ivan Kohler asked whether having an offer in dollars would help convince other companies, we got zero feedback.

What are the main obstacles that you face when you try to convince your managers to get the company to contribute?

By the way, we prefer that companies take small sponsorship commitments that they can sustain over multiple years, rather than granting lots of money now and then not being able to afford it the following year.

Thanks to our sponsors

Let me thank our main sponsors:

by Raphaël Hertzog at October 15, 2014 07:45 AM

Chris Siebenmann

Why system administrators hate security researchers every so often

So in the wake of the Bash vulnerability I was reading this Errata Security entry on Bash's code (via due to an @0xabad1dea retweet) and I came across this:

So now that we know what's wrong, how do we fix it? The answer is to clean up the technical debt, to go through the code and make systematic changes to bring it up to 2014 standards.

This will fix a lot of bugs, but it will break existing shell-scripts that depend upon those bugs. That's not a problem -- that's what upping the major version number is for. [...]

I cannot put this gently, so here it goes: FAIL.

The likely effect of any significant amount of observable Bash behavior changes (for behavior that is not itself a security bug) will be to leave security people feeling smug and the problem completely unsolved. Sure, the resulting Bash will be more secure. A powered off computer in a vault is more secure too. What it is not is useful, and the exact same thing is true of cavalierly breaking things in the name of security.

Bash's current behavior is relied on by a great many scripts written by a great many people. If you change any significant observable part of that behavior, so that scripts start breaking, you have broken the overall system that Bash is a part of. Your change is not useful. It doesn't matter if you change Bash's version number because changing the version number does nothing to magically fix those broken scripts.

Fortunately (for sysadmins), the Bash maintainers are extremely unlikely to take changes that will cause significant breakage in scripts. Even if the Bash maintainers take them, many distribution maintainers will not take them. In fact the distributions who are most likely to not take the fixes are the distributions that most need them, ie the distributions that have Bash as /bin/sh and thus where the breakage will cause the most pain (and Bashisms in such scripts are not necessarily bugs). Hence such a version of Bash, if one is ever developed by someone, is highly likely to leave security researchers feeling smug about having fixed the problem even if people are too obstinate to pick up their fix and to leave systems no more secure than before.

But then, this is no surprise. Security researchers have been ignoring the human side of their nominal field for a long time.

(As always, social problems are the real problems. If your proposed technical solution to a security issue is not feasible in practice, you have not actually fixed the problem. As a corollary, calling for such fixes is much the same as hoping magical elves will fix the problem.)

by cks at October 15, 2014 05:13 AM

October 14, 2014

Ubuntu Geek

VirtualBox 4.3.18 released and ubuntu installation instructions included

VirtualBox is a powerful x86 and AMD64/Intel64 virtualization product for enterprise as well as home use. Not only is VirtualBox an extremely feature rich, high performance product for enterprise customers, it is also the only professional solution that is freely available as Open Source Software under the terms of the GNU General Public License (GPL) version 2.
Read the rest of VirtualBox 4.3.18 released and ubuntu installation instructions included (502 words)


by ruchi at October 14, 2014 11:25 PM

Aaron Johnson

Day 4: Glaciers and Lagoons: Iceland

Day 4 of the trip started early (again) with the two year old up and at’em at 0600. We got to go outside and watch the sunrise (remember that the sun doesn’t really “rise” until about 8am in October in Iceland) and play in the snow / ice, which was pretty awesome if you happened to be 4 and 2 years old. View from the hill a couple hundred yards away from the house looked like this:

I decided that I’m coming back to Iceland to hunt with Robert (our host) in 10 years, he’s a guide up in the western fjords.

Got a longer bit of driving in today, we drove through the misty fields of moss covered rock that looked like something out of a Dr. Seuss film:

and read about the mist hardships, which not only dramatically impacted Iceland but caused mayhem all over Europe and likely contributed to the French Revolution. A bit later we came across a waterfall with a geocache… or at least it was supposed to have a geocache but after a bit of scrambling through barbed wire fences and then underneath a giant boulder and finally up a hill we didn’t find the geocache, which was most likely removed by the property owners on either side of the waterfall that don’t like geocachers and as such, most likely hate babies, baseball and America. We (Beck, Kai and I) did have a great time though and hey, awesome waterfall:

30 seconds down the road we visited Dverghamrar (a set of basalt rock columns) which immediately turned into a rock climbing session for all involved. Beck made it up to the top or at least very close to the top of the rock formation, followed closely by his little brother who wouldn’t have had it any other way.

There was a geocache here which we “logged” but it was one of those educational ones that you have to answer questions for and feels more like homework than actual fun. We’ll count it though. Lunch was at a picnic table overlooking the rock formation (peanut butter and jelly and cucumbers on the side) and then we started the big drive for the day, which had scenes like this:

Big drive ended at Skaftafell National Park, where we decided I would go in to the visitors center and ask for good things for kids to do. I desperately wanted to step foot on a glacier just to say that we had done so and asked the receptionist if we could walk on a glacier somewhere. She said “no, not without a guide”, very firmly. I then asked if I could just put ONE foot on a glacier without a guide to which she again said “no” without any hesitation. I figured out (I’m pretty smart) that I wasn’t going to set foot on a glacier but proceeded to drive the short distance that she had pointed out to where we could get close to the glaciers and then we hiked a bit to see said glaciers and OH MY GOODNESS are they huge and I now understand why she said we couldn’t set foot on one, at least here. They were AWE inspiring and if I ever make it back, I’m definitely getting a guide and doing the crampon / hike on them thing. They look amazing:

But the day wasn’t over yet! Next on the list was the glacier lagoon which sounded interesting but we hadn’t done much research on it and didn’t book any of the boat trips not knowing if they’d allow 2 year olds so we arrived and saw this HUGE lake full of icebergs and a bunch of bright yellow and white amphibious vehicles that we immediately bought tickets to go on and hopped on one just five minutes after getting out of the car:

Boys had a pretty good time, we got to “eat” iceberg ice (which was probably the young kid highlight of the day) and then spent the rest of the time looking for porpoises and seals, which apparently inhabit the lagoon to no avail, but then the icebergs were pretty cool too:

After all that, we still had more awesome left in us and not coincidentally, needed dinner. We had recommendations to eat lobster in a town called Hofn and ended up at a place called Kaffi Hornið, where Karen and I split 400g of small lobster tails which were succulent and almost sweet and disappeared very quickly off the plate as did a couple of Einstök White Ales. We then hightailed it to the hotel for the night and crashed, everyone being super tired.

  • Times eating ice and snow: 2
  • Geocaches: 2
  • Waterfalls: 1
  • Hot chocolate: yes!
  • Massive glaciers that blow your mind: more than 5, lost count

by ajohnson at October 14, 2014 08:35 PM

Day 3: Southern Iceland

Oldest kid woke up happy and not barfing. We had a successful breakfast with a family from Canada who thought our little troop of boys was amazing; the dad at the other table ended up bringing Reed a pancake and some milk because Reed asked him for more food. They didn't have to travel with us in the car though.

We got out on the road by 9ish again and our first stop was to pet some Icelandic horses that were close to the road. They tried to eat my hand but I wouldn’t let them. Also, Mommy and Daddy had a “discussion” about parking on the shoulder and the appropriate angle that a car could be parked without the car tipping over. Needless to say, the car did not tip over.

Second stop was for gas, which was super confusing (in Iceland they do weird things like a) not using English on the terminal, b) requiring that you specify how much money you want to spend up front rather than letting you choose to fill it up and c) not having manned pumps, which is nice if it's the middle of the night but not so good when you just want to fill up during the day and talk to someone about how things work) and eventually led to a call from my bank, who were wondering why I had all of a sudden gotten multiple authorizations for money from a gas station in the middle of nowhere in Iceland (I'd say the fraud protection that kicked in there was spot on). Mommy also purchased donuts at the gas station, which looked amazing but apparently had cardamom, which made them taste a bit off.

Third stop was a set of waterfalls (Seljalandsfoss), one of which you could climb behind, which we proceeded to do and got not soaked but not exactly dry either, everyone had fun here:

Next, we stumbled on to the Eyjafjallajökull Visitor Centre (which Karen had seen a movie about on the plane on the way in), which was created by a farmer and his wife after their farm was almost ruined by the Eyjafjallajökull volcano in 2010. Said visitors center was really well done and we saw a great movie about the volcano and how it affected their farm and family.

Back in the car again (you’ll notice a trend)… we drove to a great waterfall called Skógafoss, which we all climbed up (kind of, Reed was on my back) and then lucked upon a geocache and took a bunch of pictures from up on top.

We had lunch at a picnic table near the parking lot and then drove about 30 seconds over to the Skógasafn (Skógar Museum), which is an Icelandic folk / history museum which had a bunch of really great stuff including a stuffed two headed lamb:

some great skeletons and a museum reception desk guy that was a bird watcher (with a bicep tattoo that he had to show us) who played guitar in a heavy metal band who just about did a backflip when we told him we lived in Portland. He had been there and played at the Roseland theater a couple years ago. He was super awesome and reminded us that there was an old plane that had crash landed many years ago (1973) that we could drive out to (details). Beck and I had seen pictures of this plane when doing research on Iceland (it makes for some iconic photographs) and so we found it on the GPS and drove the couple miles of gravel road / black sand out to find it and lucked up on a geocache at the same time. Everyone had a great time banging on the old plane.

If you ever find yourself randomly driving around the entire island of Iceland, you should definitely make it a point to drive out to this plane. :)

We were stretching it on the “kids in the car all day” front at this point and so the iPad came out and Karen and I got to tromp around Dyrhólaeyjarviti, which is a great point that looks out over black sand beaches:

and has some clever looking arches:

Finally, we drove to our “hotel”, which turned out to be a converted house that was run by a fantastic man named Robert, who was managing it for the winter for his friends. We didn’t know that he could have made us dinner and so got dinner at some horrible fast food joint before arriving (because who knows if the “hotel” was going to have food?) but he turned out to have studied with the top wild game chef in Iceland and after we put our dudes down for the night, got to watch him put together a splendid dinner for the one other couple that was staying at the house. We had a great night chatting with Robert and the other couple (also from Canada, apparently it was Canadian Thanksgiving weekend) and then we hit the sack.


  • Geocaches: 2
  • Waterfalls: 1
  • Ice cream: negative
  • “Discussions”: 1
  • Bird watching, heavy metal guitarists who love Portland Oregon that man reception desks at small Icelandic folk museums : 1

by ajohnson at October 14, 2014 07:10 PM

Everything Sysadmin

Come hear me speak in Denver next week!

On Tuesday, Oct 21st, I'll be speaking at the Denver DevOps Meetup. It is short notice, but if you happen to be in the area, please come! I'll be talking about the new book and how DevOps principles can make the world a better place. I'll have a copy or two to give away, and special discount codes for everyone.

The meeting is at the Craftsy Offices, 999 18th St., Suite 240, Denver, CO. For more information and to RSVP, please go to

October 14, 2014 05:28 PM

Tutorial: How To Not Get Paged

Step 1: turn off your pager. Step 2: disable the monitoring system. Or.... you can run oncall using modern methodologies that constantly improve the reliability of your system.

I'm teaching a tutorial at Usenix LISA called "How To Not Get Paged: Managing Oncall to Reduce Outages".

I'm excited about this class because I'm going to explain a lot of the things I learned at Google about how to turn oncall from a PITA to a productive use of time that improves the reliability of the systems you run. Most of the material is from our new book, The Practice of Cloud System Administration, but the Q&A always leads me to say things I couldn't put in print.

Seating is limited. Register now!

How To Not Get Paged: Managing Oncall to Reduce Outages

Who should attend:

Anyone with an oncall responsibility (or their manager).


Tuesday, 11-Nov, 1:30pm-5pm at Usenix LISA


People think of "oncall" as responding to a pager that beeps because of an outage. In this class you will learn how to use oncall as a vehicle to improve system reliability so that you get paged less often.

Take back to work:

  • How to monitor more accurately so you get paged less
  • How to design an oncall schedule so that it is more fair and less stressful
  • How to assure preventative work and long-term solutions get done between oncall shifts
  • How to conduct "Fire Drills" and "Game Day Exercises" to create antifragile systems
  • How to write a good Post-mortem document that communicates better and prevents future problems

Topics include:

  • Why your monitoring strategy is broken and how to fix it
  • Building a more fair oncall schedule
  • Monitoring to detect outages vs. monitoring to improve reliability
  • Alert review strategies
  • Conducting "Fire Drills" and "Game Day Exercises"
  • "Blameless Post-mortem documents"

Register today for Usenix LISA 2014!

October 14, 2014 02:28 PM

Google Blog

OMG! Mobile voice survey reveals teens love to talk

“Ok Google, do I need an umbrella today?” “How tall do you need to be to ride the Cyclone?” “How long does a goat live?” People of all ages are starting to talk to their devices more regularly—in fact, our data also show mobile voice searches more than doubled in the past year. But how, why and where do people use voice search? To find out, we commissioned a study, conducted by Northstar Research, surveying 1,400 Americans across all age groups. Here are the results:

We weren’t surprised to find that teens—always ahead of the curve when it comes to new technology—talk to their phones more than the average adult. More than half of teens (13-18) use voice search daily—to them it’s as natural as checking social media or taking selfies. Adults are also getting the hang of it, with 41 percent talking to their phones every day and 56 percent admitting it makes them “feel tech savvy.”
Both teens and adults are asking their phones for directions and using them to help skip the hassle of typing in phone numbers. And it’s pretty clear a lot of people are relying on voice search for multitasking: they talk to their phones while watching TV (38 percent) and 23 percent of adults use voice search while cooking. Apparently, it’s becoming common kitchen etiquette to ask your mobile device: “Ok Google, how many ounces are in a gallon?”—all while making sure your screen stays crumb-free.
While people of all ages ask practical questions, it’s teens who are exploring all angles, with nearly one-third talking to their phones to get help with homework. I see my kids asking my phone questions like: “Ok Google, who was the sixth president of the U.S.?” or “what’s the tallest mountain in Europe?” On the bright side, teens are ditching voice search in the classroom: 74 percent of them think using voice search at school is unacceptable. In fact, most admit to using voice search “just for fun”—I know my daughter finds it pretty amusing to tell her phone: “Ok Google, play Olivia Holt’s ‘Fearless’” to start a dance party.
And teens don’t seem to associate any stigma with using voice search while hanging out with friends, whereas only one-quarter of adults speak to their phones when in the company of others. Teens don’t mind talking to devices in private as well, with more than one in five admitting to using voice search while in the bathroom! Maybe they’re merely setting reminders like “Ok Google, remind mom to buy toilet paper next time we’re at Safeway.”
Though it’s already helping a lot of people save time and simplify their days, there’s also potential for voice search to do a lot more in the future. So we asked people what they wished voice search could one day deliver. And I have to say, I agree with the results! It would be great to rely on my voice to easily find my car keys or TV remote, both of which somehow always end up under the couch cushions.

And I’m not alone in wishing a simple voice command could save me from having to cook dinner. Forty-five percent of teens—and 36 percent of adults!—wish they could place a pizza delivery order using voice search on their mobile device. We’re not quite there yet, but next time you don’t feel like cooking, just pull out your phone, tap the Google app, and say: “Ok Google, call Round Table Pizza.” You’ll still have to place your order over the phone, but we’re getting closer!

The small print: The study was commissioned by Google and executed by Northstar Research, a global consulting firm. It examined the smartphone voice search habits of 1,400 Americans, 13 years of age and older (400 ages 13-17 and 1,000 adults ages 18+). Voice search is part of the Google app (available on iOS and Android) and is the best way to access Google for helpful assistance throughout your day. Learn more about the Google app.

by Emily Wood ( at October 14, 2014 03:16 PM

Yellow Bricks

VMworld attendees: Give Back!

Always wanted to give back to the community but can’t find the time? How about VMware helping you with that? If you are attending VMworld Europe you can simply do this by going to the hangspace and throwing a paper airplane as far as you can. The VMware Foundation did the same thing in the US and it was a big success; I am hoping we can repeat that! The amount that will be donated will depend on where your airplane lands. It could be as little as 15 dollars, but it could also easily be 1,000. I just went to the hangspace and threw a paper airplane with my friend Jessica from the VMware Foundation and former colleague, now at EMC, Paul Manning.

Take the time and go to the hangspace… it literally only takes 2 minutes. Give back!

"VMworld attendees: Give Back!" originally appeared on Follow me on twitter - @DuncanYB.

Pre-order my upcoming book Essential Virtual SAN via Pearson today!

by Duncan Epping at October 14, 2014 01:42 PM

EVO:RAIL now also available through HP and HDS!

This is going to be a really short blog post, but as many folks have asked about this I figured it was worth a quick article. In the VMworld Europe 2014 keynote today, Pat Gelsinger announced that both HP and Hitachi Data Systems have joined the VMware EVO:RAIL program. This is great news, if you ask me, for customers throughout the world, as it provides more options for procuring an EVO:RAIL hyper-converged infrastructure appliance through your preferred server vendor!

The family is growing: Dell, Fujitsu, Hitachi Data Systems, HP, Inspur, NetOne Systems and SuperMicro… who would you like to see next?

"EVO:RAIL now also available through HP and HDS!" originally appeared on Follow me on twitter - @DuncanYB.

Pre-order my upcoming book Essential Virtual SAN via Pearson today!

by Duncan Epping at October 14, 2014 09:58 AM

Evaggelos Balaskas

Fairphone update #1

If you missed my previous blog post about the Fairphone, you can find it here.

This blog post documents how to become root and do some “advanced” stuff.


Fairphone comes with an iFixit app - and of course with some other apps too ;)
If you want to remove it, you can simply connect your phone to your linux box, enable USB debugging and adb shell into your phone.
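
If you are not sure the phone is actually visible over adb, a quick sanity check from the Linux side looks something like this (a minimal sketch, assuming the Android platform tools are installed):

adb devices

The phone should show up in the list with the state "device"; if it says "unauthorized" or does not appear at all, re-check the USB debugging setting.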

Fairphone is already rooted, so when you connect to it via adb, simply type su to become root:


# adb shell
shell@android:/ $ su
shell@android:/ # 

You can do whatever you like - but be careful with it!

Next, remount your system partition to be read-write:

# mount -o rw,remount /system 

and then simply remove the app you don't need:

# rm /system/app/FairPhoneIFixIt.apk

(you could alternatively use an app-removal application - but this is more fun, right?)
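
When you are done removing things, it is a good habit to flip /system back to read-only. A minimal sketch (same shell, still as root; the paths are the ones used above):

# ls /system/app/
# mount -o ro,remount /system

If the .apk is gone from the listing, the app should also disappear from the launcher (a reboot may be needed before it vanishes completely).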

And now to the more interesting part:


How to add busybox to your Fairphone.

You need to download the busybox-armv7l binary from here

and use adb to push it to your phone:

adb push busybox-armv7l /sdcard/

After that, connect via adb shell, become root, remount /system read-write and

cp /sdcard/busybox-armv7l /system/bin/
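
One step that is easy to miss: the copied binary also has to be executable before you can run it. A minimal sketch of the whole on-phone sequence, assuming the adb push above succeeded:

# mount -o rw,remount /system
# cp /sdcard/busybox-armv7l /system/bin/
# chmod 755 /system/bin/busybox-armv7l
# /system/bin/busybox-armv7l

Running it with no arguments just prints the usage text and the applet list, which is a cheap way to confirm that it actually executes.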

Fairphone comes with toolbox. There are a few commands that are just links pointing to toolbox:

cat chmod chown cmp cp date dd df dmesg du getevent getprop grep hd id ifconfig iftop insmod ioctl ionice kill ln log ls lsmod lsof md5 mkdir mount mv nandread netstat newfs_msdos notify printenv ps reboot renice rm rmdir rmmod route schedtop sendevent setconsole setprop sleep smd start stop sync top touch umount uptime vmstat watchprops wipe

but busybox has more power:

[, [[, acpid, add-shell, addgroup, adduser, adjtimex, arp, arping, ash,
awk, base64, basename, beep, blkid, blockdev, bootchartd, brctl,
bunzip2, bzcat, bzip2, cal, cat, catv, chat, chattr, chgrp, chmod,
chown, chpasswd, chpst, chroot, chrt, chvt, cksum, clear, cmp, comm,
conspy, cp, cpio, crond, crontab, cryptpw, cttyhack, cut, date, dc, dd,
deallocvt, delgroup, deluser, depmod, devmem, df, dhcprelay, diff,
dirname, dmesg, dnsd, dnsdomainname, dos2unix, du, dumpkmap,
dumpleases, echo, ed, egrep, eject, env, envdir, envuidgid, ether-wake,
expand, expr, fakeidentd, false, fbset, fbsplash, fdflush, fdformat,
fdisk, fgconsole, fgrep, find, findfs, flock, fold, free, freeramdisk,
fsck, fsck.minix, fsync, ftpd, ftpget, ftpput, fuser, getopt, getty,
grep, groups, gunzip, gzip, halt, hd, hdparm, head, hexdump, hostid,
hostname, httpd, hush, hwclock, id, ifconfig, ifdown, ifenslave,
ifplugd, ifup, inetd, init, insmod, install, ionice, iostat, ip,
ipaddr, ipcalc, ipcrm, ipcs, iplink, iproute, iprule, iptunnel,
kbd_mode, kill, killall, killall5, klogd, last, less, linux32, linux64,
linuxrc, ln, loadfont, loadkmap, logger, login, logname, logread,
losetup, lpd, lpq, lpr, ls, lsattr, lsmod, lsof, lspci, lsusb, lzcat,
lzma, lzop, lzopcat, makedevs, makemime, man, md5sum, mdev, mesg,
microcom, mkdir, mkdosfs, mke2fs, mkfifo, mkfs.ext2, mkfs.minix,
mkfs.vfat, mknod, mkpasswd, mkswap, mktemp, modinfo, modprobe, more,
mount, mountpoint, mpstat, mt, mv, nameif, nanddump, nandwrite,
nbd-client, nc, netstat, nice, nmeter, nohup, nslookup, ntpd, od,
openvt, passwd, patch, pgrep, pidof, ping, ping6, pipe_progress,
pivot_root, pkill, pmap, popmaildir, poweroff, powertop, printenv,
printf, ps, pscan, pstree, pwd, pwdx, raidautorun, rdate, rdev,
readahead, readlink, readprofile, realpath, reboot, reformime,
remove-shell, renice, reset, resize, rev, rm, rmdir, rmmod, route, rpm,
rpm2cpio, rtcwake, run-parts, runlevel, runsv, runsvdir, rx, script,
scriptreplay, sed, sendmail, seq, setarch, setconsole, setfont,
setkeycodes, setlogcons, setserial, setsid, setuidgid, sh, sha1sum,
sha256sum, sha3sum, sha512sum, showkey, slattach, sleep, smemcap,
softlimit, sort, split, start-stop-daemon, stat, strings, stty, su,
sulogin, sum, sv, svlogd, swapoff, swapon, switch_root, sync, sysctl,
syslogd, tac, tail, tar, tcpsvd, tee, telnet, telnetd, test, tftp,
tftpd, time, timeout, top, touch, tr, traceroute, traceroute6, true,
tty, ttysize, tunctl, udhcpc, udhcpd, udpsvd, umount, uname, unexpand,
uniq, unix2dos, unlzma, unlzop, unxz, unzip, uptime, users, usleep,
uudecode, uuencode, vconfig, vi, vlock, volname, wall, watch, watchdog,
wc, wget, which, who, whoami, whois, xargs, xz, xzcat, yes, zcat, zcip

To add a new command to your Fairphone, just link it to busybox:

shell@android:/system/bin # ln -s busybox vi 

From here on… you can do pretty much whatever you like!
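
If linking applets one by one gets tedious, busybox can also create the links itself with its --install option (-s makes symlinks instead of hard links). A sketch, assuming /system is still mounted read-write and the binary name used above:

# /system/bin/busybox-armv7l --install -s /system/bin

Existing commands should not be replaced (the link simply fails if the name is taken), so the stock toolbox entries stay as they are.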

Tag(s): fairphone

October 14, 2014 09:28 AM

Administered by Joe. Content copyright by their respective authors.