Planet Sysadmin               

          blogs for sysadmins, chosen by sysadmins...

August 02, 2014

Chris Siebenmann

The benchmarking problems with potentially too-smart SSDs

We've reached the point in building out our new fileservers and iSCSI backends where we're building the one SSD-based fileserver and its backends. Naturally we want to see what sort of IO performance we get on SSDs, partly to make sure that everything is okay, so I fired up my standard basic testing tool for sequential IO. It gave me some numbers, the numbers looked good (in fact pretty excellent), and then I unfortunately started thinking about the fact that we're doing this with SSDs.

Testing basic IO speed on spinning rust is relatively easy because spinning rust is in a sense relatively simple and predictable. Oh, sure, you have different zones and remapped sectors and so on, but you can be all but sure that when you write arbitrary data to disk that it is actually going all the way down to the platters unaltered (well, unless your filesystem does something excessively clever). This matters for my testing because my usual source of test data to write to disk is /dev/zero, and data from /dev/zero is what you could call 'embarrassingly compressible' (and easily deduplicated too).

The thing is, SSDs are not spinning rust and thus are nowhere near as predictable. SSDs contain a lot of magic, and increasingly some of that magic apparently involves internal compression on the data you feed them. When I was writing lots of zeros to the SSDs and then reading them back, was I actually testing the SSD read and write speeds or was I actually testing how fast the embedded processors in the SSDs could recognize zero blocks and recreate them in RAM?

(What matters to our users is the real IO speeds, because they are not likely to read and write zeros.)

Once you start going down the road of increasingly smart devices, the creeping madness starts rolling in remarkably fast. I started out thinking that I could generate a relatively small block of random data (say 4K or something reasonable) and repeatedly write that. But wait, SSDs actually use much larger internal block sizes and they may compress over that larger block size (which would contain several identical copies of my 4K 'simple' block). So I increased the randomness block size to 128K, but now I'm worrying about internal SSD deduplication since I'm writing a lot of copies of this.

The short version of my conclusion is that once I start down this road the only sensible approach is to generate fully random data. But if I'm testing high speed IO in an environment of SSDs and multiple 10G iSCSI networks, I need to generate this random data at a pretty high speed in order to be sure it's not the potential rate limiting step.

(By the way, /dev/urandom may be a good and easy source of random data but it is very much not a high speed source. In fact it's an amazingly slow source, especially on Linux. This was why my initial approach was basically 'read N bytes from /dev/urandom and then repeatedly write them out'.)
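
One way to get data that is random enough at a high enough rate is to expand a small random seed with a CTR-mode cipher, which modern CPUs can do far faster than /dev/urandom. A rough sketch of the idea (the output device is a placeholder and the tool choice is just an illustration):

# Encrypt /dev/zero with a throwaway random key; a CTR keystream never
# repeats, so the output is effectively incompressible and can't be
# deduplicated either. /dev/sdX stands in for the device or file under test.
openssl enc -aes-128-ctr -nosalt \
    -pass pass:"$(head -c 32 /dev/urandom | base64)" \
    </dev/zero 2>/dev/null |
  dd of=/dev/sdX bs=1M count=4096 iflag=fullblock oflag=direct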

PS: I know that I'm ignoring all sorts of things that might affect SSD write speeds over time. Right now I'm assuming that they're going to be relatively immaterial in our environment for hand-waving reasons, including that we can't do anything about them. Of course it's possible that SSDs detect you writing large blocks of zeros and treat them as the equivalent of TRIM commands, but who knows.

by cks at August 02, 2014 04:09 AM

August 01, 2014

apt-get

My Free Software Activities in July 2014

This is my monthly summary of my free software related activities. If you’re among the people who made a donation to support my work (548.59 €, thanks everybody!), then you can learn how I spent your money. Otherwise it’s just an interesting status update on my various projects.

Distro Tracker

Now that tracker.debian.org is live, people reported bugs (on the new tracker.debian.org pseudo-package that I requested) faster than I could fix them. Still, I spent many, many hours on this project: reviewing submitted patches (thanks to Christophe Siraut, Joseph Herlant, Dimitri John Ledkov, Vincent Bernat, James McCoy, and Andrew Starr-Bochicchio, who all submitted some patches!), fixing bugs, making sure the code works with Django 1.7, and starting the same work for Python 3.

I added a tox.ini so that I can easily run the test suite in all 4 supported environments (created by tox as virtualenvs with the combinations of Django 1.6/1.7 and Python 2.7/3.4).
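
For illustration, a tox.ini along those lines might look roughly like this (a sketch only; the real environment names and test command may differ):

[tox]
envlist = py27-django16, py27-django17, py34-django16, py34-django17

[testenv]
commands = {envpython} manage.py test

[testenv:py27-django16]
basepython = python2.7
deps = Django>=1.6,<1.7

[testenv:py27-django17]
basepython = python2.7
deps = Django>=1.7,<1.8

[testenv:py34-django16]
basepython = python3.4
deps = Django>=1.6,<1.7

[testenv:py34-django17]
basepython = python3.4
deps = Django>=1.7,<1.8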

Over the month, the git repository saw 73 commits; we fixed 16 bugs, as well as other issues that were only reported over IRC in #debian-qa. With the help of Enrico Zini and Martin Zobel, we enabled login via sso.debian.org (Debian’s official SSO) so that Debian developers don’t even have to explicitly create an account.

As usual more help is needed and I’ll gladly answer your questions and review your patches.

Misc packaging work

Publican. I pushed a new upstream release of publican and dropped a useless build-dependency that was plagued by a difficult-to-fix RC bug (#749357 for the curious; I tried to investigate but it needs major work for make 4.x compatibility).

GNOME 3.12. With gnome-shell 3.12 hitting unstable, I had to update gnome-shell-timer (and filed an upstream ticket at the same time), a GNOME Shell extension for running countdown timers.

Django 1.7. I packaged python-django 1.7 release candidate 1 in experimental (found a small bug, submitted a ticket with a patch that got quickly merged) and filed 85 bugs against all the reverse dependencies to ask their maintainers to test their packages with Django 1.7 (which we obviously want to upload before the freeze). We identified a pain point in upgrades for packages using South and tried to discuss it with upstream, but after closer investigation it turned out that none of the packages are really affected. The problem can still hit administrators of non-packaged Django applications, though.

Misc stuff. I filed a few bugs (#754282 against git-import-orig --uscan, #756319 against wnpp to see if someone would be willing to package loomio), reviewed an updated package for django-ratelimit in #755611, and made a non-maintainer upload of mairix (without prior notice) to update the package to a new upstream release and bring it up to modern packaging norms (Mako failed to make an upload in 4 years so I just went ahead and did what I would have done if it were mine).

Kali work resulting in Debian contributions

Kali wants to switch from being based on stable to being based on testing, so I tried to set up britney to manage a new kali-rolling repository and encountered some problems that I reported to debian-release. Niels Thykier has been very helpful and even managed to improve britney thanks to the very specific problem that the kali setup triggered.

Since we use reprepro, I wrote a Python wrapper to transform the HeidiResult file into a set of reprepro commands, and at the same time I filed #756399 to request proper support of heidi files in reprepro. While analyzing britney’s excuses file, I also noticed that the Kali mirrors contain many source packages that are useless because they only concern architectures that we don’t host (I filed #756523 against reprepro). While trying to build a live image of kali-rolling, I noticed that libdb5.1 and db5.1-util were still marked as priority standard when in fact Debian has already switched to db5.3, so they should only be optional (I filed #756623 against ftp.debian.org).

When doing some upgrade tests from kali (wheezy based) to kali-rolling (jessie based) I noticed some problems that were also affecting Debian Jessie. I filed #756629 against libfile-fcntllock-perl (with a patch), and also #756618 against texlive-base (missing Replaces header). I also pinged Colin Watson on #734946 because I got a spurious base-passwd prompt during upgrade (it was triggered because schroot copied my unstable system's /etc/passwd file into the kali chroot and the package noticed a difference in the shell of all the system users).

Thanks

See you next month for a new summary of my activities.


by Raphaël Hertzog at August 01, 2014 09:13 PM

Google Blog

Through the Google lens: search trends July 25-31

The dog days of summer are upon us (just look at the last time we posted on this here blog), but there’s still plenty of excitement to keep search buzzing. From big baseball news to blockbusters, here’s a look at the last seven days in search:

Baseball bombshells
This season’s MLB trade deadline was yesterday, and as news of surprise trades emerged, people were quick to catch the latest via search. More than half of the day’s hot trends were baseball-related, and searches for the blog [mlb trade rumors] reached their highest volume all year. From the three-way trade that landed David Price in Detroit and Austin Jackson in Seattle to the A's-Red Sox swap of Jon Lester and Yoenis Céspedes, it was quite the active Deadline Day.
Searchnado
Thanks in large part to nerdfest Comic-Con, it was a week of sneak peeks for movie fans. First up: the reboot of Mad Max. The long-awaited remake of the Australian classic premiered its trailer at the conference last weekend, and fans got their first real glimpse of Tom Hardy (of Bane fame) as Max. Meanwhile, a sneak peek of Israeli actress Gal Gadot as Wonder Woman in the forthcoming Batman v. Superman: Dawn of Justice made waves. And a new teaser trailer for Hunger Games: Mockingjay - Part 1 gave us a new glimpse at Katniss as the face of the rebellion.

Though neither film will be in theaters for some time, fans are keeping busy in the meantime: searches for new releases Guardians of the Galaxy, Lucy and Hercules were all high on the charts. And those who prefer their movies with a hefty serving of camp to go with their popcorn had more than enough to satisfy them with Sharknado 2: The Second One. Searches for [sharknado 2 trailer] were up 95 percent over the past month, and the movie was one of the top topics Wednesday when it premiered on Syfy. Simply stunning. Finally, the trailer for the movie version of Stephen Sondheim’s Into the Woods had musical theater lovers ready for more.

Searching for symptoms
An Ebola outbreak in West Africa has people concerned about the spread of the deadly epidemic. Searches for [ebola] are at their highest ever, up 2,000%, and related searches like [ebola in nigeria], [ebola symptoms] and [what is ebola] grew too. Worldwide, Liberia had the highest search volume of any country.

Tip of the week
Next time you’re invited to a summer barbecue, let Google help you remember to pick up a snack or six-pack to contribute to the fiesta. On your Android or iPhone, just say “Ok Google, remind me to buy a watermelon when I’m at Safeway.” Next time you’re near the store, you’ll get a prompt. No more showing up with empty hands!
Posted by Emily Wood, Google Blog Editor, who searched this week for [beyonce surfboard] and [arcade fire tour costumes]

by Emily Wood (noreply@blogger.com) at August 01, 2014 04:49 PM

Chris Siebenmann

The temptation to rebuild my office machine with its data in ZFS on Linux

A while back, I added a scratch drive to my office workstation to have more space for relatively unimportant data, primarily ISOs for virtual machine installs and low-priority virtual machine images and extra disks (which I used to test, e.g., virtual fileservers and iSCSI backends). About a month ago I gave in to temptation and rebuilt that disk space as a ZFS pool using ZFS on Linux, fundamentally because I've come to really like ZFS in general and I really wanted to try out ZFS on Linux for something real. The whole experience has gone well so far; it wasn't particularly difficult to build DKMS-based RPMs from the development repositories, everything has worked fine, and even upgrading kernels has been completely transparent to date.

(The advantage of building for DKMS is that the ZFS modules get automatically rebuilt when I install Fedora kernel upgrades.)
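
For illustration, dkms status is a quick way to confirm that the modules have been rebuilt for the running kernel (the output shape below is illustrative, with placeholders rather than real versions):

dkms status
# spl, <version>, <running kernel>, x86_64: installed
# zfs, <version>, <running kernel>, x86_64: installed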

All of this smooth sailing has been steadily ratcheting up the temptation to go whole hog on ZFS on Linux by rebuilding my machine to have all of my user data in a ZFS pool instead of in the current 'filesystems on LVM on software RAID-1' setup (I'd leave the root filesystem and swap as they are now). One of the reasons this is extra tempting is that I actually have an easy path to it because my office workstation only has two main disks (and can easily fit another two temporarily).

I've been thinking about this for a while and so I've come up with a bunch of reasons why this is not quite as crazy as it sounds:

  • given that we have plenty of spare disks and I can put an extra two in my case temporarily, moving to ZFS is basically only a matter of copying a lot of data to new disks. It would be tedious but not particularly adventurous. (A rough sketch of that copy step follows this list.)

  • I have automated daily backups of almost all of my machine (and I could make that all of my machine) and I don't keep anything really crucial on it, like say my email (all of that lives on our servers). If ZFS totally implodes and eats my data on the machine, I can restore from them.

  • I can simply save my current pair of disks to have a fairly fast and easy way to migrate back from ZFS. If it doesn't work out, I can just stick them back in the machine and basically rsync my changes back to my original filesystems. Again, tedious but not adventurous.

  • My machine has 16GB of RAM and I mostly don't use much of it, which means that I think I'm relatively unlikely to run into what I consider the major potential problem with ZoL. And if I'm wrong, well, migrating back will merely be tedious instead of actively painful.
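
A rough sketch of that copy step (the pool name, dataset layout, and disk names are all placeholders, and the real migration would involve more datasets and properties):

# Mirror the two temporary disks into a pool, then copy the data across.
zpool create newdata mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2
zfs create newdata/home
rsync -aHSx --numeric-ids /home/ /newdata/home/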

Set against all of this 'not as crazy as it looks' is that this would be daring and there wouldn't be any greater reason beyond just having my data in ZFS on my local machine. We're extremely unlikely to have any use for ZoL in production for, well, a long time into the future.

(And I probably wouldn't be building up enough confidence with ZoL to do this to my home machine, because most of these reasons don't apply there. I have no spare data disks, no daily backups, and no space for extra drives in my case at home. On the attractive side, a ZFS L2ARC would neatly solve my SSD dilemma and my case does have room for one.)

One of the reasons that this is quite daring is the possibility that at some point I wouldn't be able to apply Fedora kernel updates or upgrade to a new Fedora version because ZoL doesn't support the newer kernels yet. Since I also use VMWare I already sort of live with this possibility today, but VMWare would be a lot easier to do without for some noticeable amount of time than, well, all of my data.

(The real answer is that that sort of thing would force me to migrate back to my current setup, either with the rsync approach or just by throwing another two new disks in and copying things from scratch.)

Still, the temptation is there and it keeps getting stronger the longer that my current ZoL pool is trouble free (and every time I run 'zpool scrub' on it and admire the assurance of no errors being detected).

(Having written this, I suspect that I'll give in in a week or two.)

PS: I don't think I currently have any software that requires filesystem features that ZoL doesn't currently implement, like POSIX ACLs. If I'm wrong about that a move to ZFS would be an immediate failure. I admit that I deliberately put a VMWare virtual machine into my current ZFS pool because I half expected VMWare to choke on ZFS (it hasn't so far).

PPS: I have no particular interest in going all in on ZoL and putting my root filesystem into ZFS, for various reasons. I like having the root filesystem in a really simple setup that needs as few moving parts as possible, so today it's not even in LVM, just on a software RAID-1 partition (the same is true for swap). Flexible space management is really overkill because the size of modern disks means that it's easy to give the root filesystem way more space than it will ever need. I do crazy things like save a copy of every RPM I've ever installed or updated through yum and my work root filesystem (/var included) is still under 60 GB.

by cks at August 01, 2014 03:24 AM

July 31, 2014

Ubuntu Geek

Setup FTP server using VsFtp and Configure Secure FTP connections (Using TLS/SSL) on Ubuntu 14.04 Server

vsftpd is a GPL licensed FTP server for UNIX systems, including Linux. It is secure and extremely fast. It is stable. Don't take my word for it, though. Below, we will see evidence supporting all three assertions. We will also see a list of a few important sites which are happily using vsftpd. This demonstrates vsftpd is a mature and trusted solution.
(...)
Read the rest of Setup FTP server using VsFtp and Configure Secure FTP connections (Using TLS/SSL) on Ubuntu 14.04 Server (522 words)



by ruchi at July 31, 2014 11:49 PM

August 01, 2014

Giri Mandalika

Programming in C: Few Tidbits #2

(1) ceil() returns an incorrect value?

ceil() rounds the argument upward to the nearest integer value in floating-point format. For example, calling ceil() with an argument (2/3) should return 1.


printf("\nceil(2/3) = %f", ceil(2/3));

results in:


ceil(2/3) = 0.000000

.. which is not the expected result.

However:


printf("\nceil((float)2/3) = %f", ceil((float)2/3));

shows the expected result.


ceil((float)2/3) = 1.000000

The incorrect result in the first attempt is due to integer division: since both operands of the division are integers, the division discards the fractional part, so ceil() is called with 0.

The desired result can be achieved by casting one of the operands to float or double, as shown in the subsequent attempt.

One final example for the sake of completeness.


printf("\nceil(2/(float)3) = %f", ceil(2/(float)2));
..
ceil(2/(float)3) = 1.000000

(2) Main difference between abort() and exit() calls

On a very high level: abort() raises the SIGABRT signal, causing abnormal termination of the calling process without calling the functions registered with atexit(), and usually results in a core dump. Some cleanup activity may still happen.

exit() causes normal process termination after executing functions registered with the atexit() handler, and after performing cleanup activity such as flushing and closing all open streams.

If it is desirable to bypass atexit() registered routine(s) during a process termination, one way is to call _exit() rather than exit().

Of course, this is all high level and the information provided here is incomplete. Please check relevant man pages for detailed information.


(3) Current timestamp

The following sample code shows the current timestamp in two different formats. Check relevant man pages for more information.


#include <time.h>
..
char timestamp[80];
time_t now;
struct tm *curtime;

now = time(NULL);
curtime = localtime(&now);

strftime(timestamp, sizeof(timestamp), "%m-%d-%Y %X", curtime);

printf("\ncurrent time: %s", timestamp);
printf("\ncurrent time in a different format: %s", asctime(curtime));
..

Executing this code shows output


current time: 07-31-2014 22:05:42
current time in a different format: Thu Jul 31 22:05:42 2014

by Giri Mandalika (noreply@blogger.com) at August 01, 2014 12:13 AM

July 31, 2014

Yellow Bricks

Win a free ebook copy of Essential Virtual SAN?


A couple of weeks ago the electronic version of Essential Virtual SAN was published and this week the first paper copies are shipping! Because of that Cormac and I decided we will give away 4 ebooks each. If you want to win one then please let us know why you feel you deserve to win a copy using the hashtag #essentialvirtualsan on twitter. Cormac and I will decide which 8 tweets will win an ebook, and of course we will favour the ones that make us laugh :)

So let's be clear:

  • tweet why you think you deserve the book
  • use the hashtag #essentialvirtualsan

The 8 winners will be announced Friday 8th of August.

 

"Win a free ebook copy of Essential Virtual SAN?" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.


Pre-order my upcoming book Essential Virtual SAN via Pearson today!

by Duncan Epping at July 31, 2014 05:27 PM

Paper copy of Essential Virtual SAN available as of today!


3 weeks ago I announced the availability of the ebook of “Essential Virtual SAN”. Today I have the pleasure to inform you that the paper copy has also hit the streets and is being shipped by Amazon as of today. So for those who were waiting to order until the paper version was available… Go here, order it today, and have it at home by tomorrow! The book covers the architecture of Virtual SAN, operational and architectural gotchas and sizing guidance, design examples and much more. Just pick it up,

"Paper copy of Essential Virtual SAN available as of today!" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.


Pre-order my upcoming book Essential Virtual SAN via Pearson today!

by Duncan Epping at July 31, 2014 07:10 AM

Chris Siebenmann

Why I like ZFS in general

Over time I've come around to really liking ZFS, not just in the context of our fileservers and our experiences with them but in general. There are two reasons for this.

The first is that when left to myself the data storage model I gravitate to is a changeable collection of filesystems without permanently fixed sizes that are layered on top of a chunk of mirrored storage. I believe I've been doing this since before I ran into ZFS because it's just the right and simple way: I don't have to try to predict how many filesystems I need in advance or how big they all have to be, and managing my mirrored storage in one big chunk instead of a chunk per filesystem is just easier. ZFS is far from the only implementation of this abstract model but it's an extremely simple and easy to use take on it, probably about as simple to use as you can get. And it's one set of software to deal with the whole stack of operations, instead of two or three.

(In Linux, for example, I do this with filesystems in LVM on top of software RAID mirrors. Each of these bits works well but there are three different sets of software involved and any number of multi-step operations to, say, start using larger replacement disks.)
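
To make the multi-step part concrete, moving that stack onto larger disks ends up looking roughly like this once both mirror halves have been replaced (device and volume names are placeholders):

# Every layer has to be told about the new space separately.
mdadm --grow /dev/md0 --size=max     # let the RAID-1 use the bigger disks
pvresize /dev/md0                    # tell LVM the physical volume grew
lvextend -L +200G /dev/vg0/data      # grow the logical volume
resize2fs /dev/vg0/data              # and finally grow the filesystem
# With ZFS the equivalent is 'zpool replace' for each disk and, with
# autoexpand=on, the pool simply grows; there are no separate layers to chase.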

The second is that as time goes by I've become increasingly concerned about both the possibility of quiet bitrot in data that I personally care about and the possibility of losing all of my data from a combination of a drive failure plus bad spots on the remaining drive. Noticing quiet bitrot takes filesystem checksums; recovering from various failure modes takes deep hooks into the RAID layer. ZFS has both and thus deals solidly with both failure possibilities, which is quite reassuring.

(ZFS also has its own fragilities, but let's pretend that software is perfect for the moment. And any theoretical fragilities have not bit me yet.)

As far as I know there is no practical competitor for ZFS in this space today (for simple single-machine setups), especially if you require it to be free or open source. The closest is btrfs but I've come to think that btrfs is doing it wrong on top of its immaturity.

by cks at July 31, 2014 03:28 AM

Standalone Sysadmin

Xenophobia Revisited

Way back in 2010, I wrote an article called Xenophobia and Elitism in the Community. The article talked mostly about xenophobia and elitism in the community through a technical lens - about the tendencies that we all sometimes get toward mocking some technology or technique, and treating people who practice using it as less intelligent than ourselves, or less worthy than ourselves. It's not a bad read. You might want to check it out.

A recent post on Reddit, titled I've gone off the deep end, brought that same feeling of revulsion back to me in full force. It's rare to see a display of open racism so blatant - regardless of what the original poster calls it. Here's the text, and you should be forewarned - this will probably upset you if you're easily upset. It pissed me off, for what that's worth:

Did any of you see the video on the front page of reddit a few days ago of a saudi guy beating the shit out of an Indian with a belt? It was pretty horrible.

However it made me laugh, and for that I'm ashamed. Working in IT I've developed a deep hatred for Indians, and I'm not even a bad or racist person. I'm just so sick of them as a group thinking they know so much more than everyone else when they often know nothing, and that damn accent, and the weird sense of superiority.

My initial thought was "I wish I could do that to the useless indian IT guys at my company" when I saw that saudi kid swinging the belt and beating him senseless.
I never used to be like that.

"I'm not even a bad or racist person"

I don't even really know how to properly respond to that, when it's immediately followed by "I'm just so sick of them as a group thinking they know so much more than everyone else when they often know nothing, and that damn accent, and the weird sense of superiority".

I'm not going to say, "this should make you mad", but if it doesn't, you might want to think about why it doesn't, and maybe reconsider some things.

What is just as depressing to me is the sheer number of comments of supporters in that thread. People who take the side of the person who wrote that he wishes he could take a belt and beat people in his company senseless.

Look, people can be pretty horrible. People can be monsters, and people can be awful to their fellow people, but I can't sit by and not comment on this blatant ...racist...xenophobic....asshole behavior. These aren't the words of someone who is frustrated. These are the words of someone who really needs to step back and understand that he is everything he claims to not be. He's racist and I don't know whether he's a bad person, but he definitely has some anger issues he should deal with.

Racism is real, and it's more than just "That person is Indian so I'm biased", it's deeply cultural, and it might be deeply biological, but regardless of where the xenophobic, racist distrust originates, we need to see it and we need to understand that it is a bias, and to compensate for it.

The temple of Apollo at Delphi, in ancient Greece, had a stone carved with the words, "γνῶθι σεαυτόν". We're more familiar with the Latin translation, "temet nosce". It means "know thyself", or literally, "get to know yourself", and that's the advice I would give to everyone, because it's something that I struggle with, too, but I work toward because it's important. It might be the most important thing that any of us can do.

We each see the world through a series of lenses handed to us by our parents, our teachers, our religious leaders, and others who we encounter through life. What we end up with is a very customized version of the world that is unique to each individual, and your experience on this Earth is very different from mine, so we see different things. If we're ever to truly understand each other, and work together effectively, I need to understand that my vision is impacted by my experiences, and I need to account for that, and you need to do the same. I can't do that unless I frankly consider how I look at things, and why, and neither can you.

If you talk with another admin or a support person, and you hear the words, "Do the needful", or "please advise", and you feel something negative, ask yourself why that is. Is a negative reaction helping your predicament? Are you better off for automatically having that kind of response? If not, work toward correcting for the response, however you can make that happen. Maybe you need to identify where in your life you got that reaction and isolate what makes you respond that way, so that you can separate that from your current experience. Or maybe you should just use the phrase occasionally yourself. Taken literally, there's nothing wrong with the phrase, assuming both sides of the conversation know what needs to be done.

In the end, I would just encourage you to 'temet nosce', and to work toward a better version of you in the future. I'm trying, and I know that it's an uphill battle, but I think it's worth fighting for. And hopefully you do, too.

by Matt Simmons at July 31, 2014 02:00 AM

July 30, 2014

Chris Siebenmann

My view on FreeBSD versus Linux, primarily on the desktop

Today I wound up saying on Twitter:

@thatcks: @devbeard I have a pile of reasons I'm not enthused about FreeBSD, especially as a desktop with X and so on. So that's not really an option.

I got asked about this on Twitter and since my views do not in any way fit into 140 characters, it's time for an entry.

I can split my views up into three broad categories: pragmatic, technical, and broadly cultural and social. The pragmatic reasons are the simplest ones and boil down to that Linux is the dominant open source Unix. People develop software for Linux first and everything else second, if at all. This is extremely visible for an X desktop (the X server and all modern desktops are developed and available first on Linux) but extends far beyond that; Go, for example, was first available on Linux and later ported to FreeBSD. Frankly I like having a wide selection of software that works without hassles and often comes pre-packaged, and generally not having to worry if something will run on my OS if it runs on Unix at all. FreeBSD may be more pure and minimal here but as I've put it before, I'm not a Unix purist. In short, running FreeBSD in general usage generally means taking on a certain amount of extra pain and doing without a certain amount of things.

On the technical side I feel that Linux and Linux distributions have made genuinely better choices in many areas, although I'm somewhat hampered by a lack of deep exposure to FreeBSD. For example, I would argue that modern .deb and RPM Linux package management is almost certainly significantly more advanced than FreeBSD ports. As another one, I happen to think that systemd is the best Unix init system currently available with a lot of things it really gets right, although it is not perfect. There are also a horde of packaging decisions like /etc/cron.d that matter to system administrators because they make our lives easier.

(And yes, FreeBSD has sometimes made better technical choices than Linux. I just think that there have been fewer of them.)

On the social and cultural side, well, I cannot put it nicely so I will put it bluntly: I have wound up feeling that FreeBSD is part of the conservative Unix axis that worships at the altar of UCB BSD, System V, and V7. This is not required by its niche as the non-Linux Unix but that situation certainly doesn't hurt; a non-Linux Unix is naturally attractive to people who don't like Linux's reinvention, ahistoricality, and brash cultural attitudes. I am not fond of this conservatism because I strongly believe that Unix needs to grow and change and that this necessarily requires experimentation, a willingness to have failed experiments, and above all a willingness to change.

This is a somewhat complex thing because I don't object to a Unix being slow moving. There is certainly a useful ecological niche for a cautious Unix that lets other people play pioneer and then adopts the ideas that have proven to be good ones (and Linux's willingness to adopt new things creates churn; just ask all of the people who ported their init scripts to Upstart and will now be re-porting them to systemd). If I was confident that FreeBSD was just waiting to adopt the good bits, that would be one thing. But as an outsider I haven't been left with that feeling; instead my brushing contacts have left me with more the view that FreeBSD has an aspect of dogmatic, 'this is how UCB BSD does it' conservatism to it. Part of this is based on FreeBSD still not adopting good ideas that are by now solidly proven (such as, well, /etc/cron.d as one small example).

This is also the area where my cultural bad blood with FreeBSD comes into play. Among other more direct things, I'm probably somewhat biased towards seeing FreeBSD as more conservative than it actually is and I likely don't give FreeBSD the benefit of the doubt when it does something (or doesn't do something) that I think of as hidebound.

None of this makes FreeBSD a bad Unix. Let me say it plainly: FreeBSD is a perfectly acceptable Unix in general. It is just not a Unix that I feel any particular enthusiasm for and thus not something I'm inclined to use without a compelling reason. My default Unix today is Linux.

(It would take a compelling reason to move me to FreeBSD instead of merely somewhere where FreeBSD is a bit better because of the costs of inevitable differences.)

by cks at July 30, 2014 05:32 AM

July 29, 2014

Ubuntu Geek

How to configure NFS Server and Client Configuration on Ubuntu 14.04

NFS was developed at a time when we weren't able to share our drives like we are able to today -- in the Windows environment. It offers the ability to share the hard disk space of a big server with many smaller clients. Again, this is a client/server environment. While this seems like a standard service to offer, it was not always like this. In the past, clients and servers were unable to share their disk space.

(...)
Read the rest of How to configure NFS Server and Client Configuration on Ubuntu 14.04 (626 words)



by ruchi at July 29, 2014 11:42 PM

Everything Sysadmin

Honest Life Hacks

I usually don't blog about "funny web pages" I found but this is relevant to the blog. People often forward me these "amazing life hacks that will blow your mind" articles because of the Time Management Book.

First of all, I shouldn't have to tell you that these are linkbait (warning: autoplay).

Secondly, here's a great response to all of these: Honest Life Hacks.

July 29, 2014 03:28 PM

The Nubby Admin

I Don’t Always Mistype My Password

It never fails. At the last two or three characters of a password, I’ll strike two keys at the same time. Maybe I pressed both, maybe I only pressed one. Whatever happened, certainly it doesn’t make any sense to press return and roll the dice. I’ll either get in or fail and have to type it all over again, and sysadmins can’t admit failure nor can we entertain the possibility of failing!!

So what’s the most logical maneuver? Backspace 400 times to make sure you got it all and try over. Make sure you fumble the password a few more times in a row so you can get a nice smooth spot worn on your backspace key.


by WesleyDavid at July 29, 2014 11:12 AM

Chris Siebenmann

FreeBSD, cultural bad blood, and me

I set out to write a reasoned, rational elaboration of a tweet of mine, but in the course of writing it I've realized that I have some of those sticky human emotions involved too, much like my situation with Python 3. What it amounts to is that in addition to my rational reasons I have some cultural bad blood with FreeBSD.

It certainly used to be the case that a vocal segment of *BSD people, FreeBSD people among them, were elitists who looked down their noses at Linux (and sometimes other Unixes too, although that was usually quieter). They would say that Linux was not a Unix. They would say that Linux was clearly used by people who didn't know any better or who didn't have any taste. There was a fashion for denigrating Linux developers (especially kernel developers) as incompetents who didn't know anything. And so on; if you were around at the right time you can probably think of other things. In general these people seemed to venerate the holy way of UCB BSD and to find little or no fault in it. Often these people believed (and propagated) other Unix mythology as well.

(This sense of offended superiority is in no way unique to *BSD people, of course. Variants have happened all through computing's history, generally from the losing side of whatever shift is going on at the time. The *BSD attitude of 'I can't believe so many people use this stupid Linux crud' echoes the Lisp reaction to Unix and the Unix reaction to Windows and Macs, at least (and the reaction of some fans of various commercial Unixes to today's free Unixes).)

This whole attitude irritated me for various reasons and made me roll my eyes extensively; to put it one way, it struck me as more religious than well informed and balanced. To this day I cannot completely detach my reaction to FreeBSD from my association of it with a league of virtual greybeards who are overly and often ignorantly attached to romantic visions of the perfection of UCB BSD et al. FreeBSD is a perfectly fine operating system and there is nothing wrong with it, but I wish it had kept different company in the late 1990s and early to mid 00s. Even today there is a part of me that doesn't want to use 'their' operating system because some of the company I'd be keeping would irritate me.

(FreeBSD keeping different company was probably impossible, though, because of where the Unix community went.)

(I date this *BSD elitist attitude only through the mid 00s because of my perception that it's mostly died down since then. Hopefully this is an accurate perception and not due to selective 'news' sources.)

by cks at July 29, 2014 03:33 AM

July 28, 2014

Standalone Sysadmin

Kerbal Space System Administration

I came to an interesting cross-pollination of ideas yesterday while talking to my wife about what I'd been doing lately, and I thought you might find it interesting too.

I've been spending some time lately playing video games. In particular, I'm especially fond of Kerbal Space Program, a space simulation game where you play the role of spaceflight director of Kerbin, a planet populated by small, green, mostly dumb (but courageous) people known as Kerbals.

Initially the game was a pure sandbox, as in, "You're in a planetary system. Here are some parts. Go knock yourself out", but recent additions to the game include a career mode in which you explore the star system and collect "science" points for doing sensor scans, taking surface samples, and so on. It adds a nice "reason" to go do things, and I've been working on building out more efficient ways to collect science and get it back to Kerbin.

Part of the problem is that when you use your sensors, whether they detect gravity, temperature, or materials science, you often lose a large percentage of the data when you transmit it back, rather than deliver it in ships - and delivering things in ships is expensive.

There is an advanced science lab called the MPL-LG-2 which allows greater fidelity in transmitted data, so my recent work in the game has been to build science ships which consist of a "mothership" with a lab, and a smaller lightweight lander craft which can go around whatever body I'm orbiting and collect data to bring to the mothership. It's working pretty well.

At the same time, I'm working on building out a collectd infrastructure that can talk to my graphite installation. It's not as easy as I'd like because we're standardized on Ubuntu Precise, which only has collectd 4.x, and the write_graphite plugin began with collectd 5.1.

To give you background, collectd is a daemon that runs and collects information, usually from the local machine, but there is an array of plugins to collect data from any number of local or remote sources. You configure collectd to collect data, and you use a write_* plugin to get that data to somewhere that can do something with it.
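
In collectd.conf terms that split is just a matter of which plugins you load; for example (plugin names here are from the standard set, and the graphite specifics come further down):

# Read plugins gather data from the local machine...
LoadPlugin cpu
LoadPlugin memory
LoadPlugin interface
# ...and a write_* plugin ships it somewhere that can store and graph it.
LoadPlugin write_graphite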

It was in the middle of explaining these two things - KSP's science missions and collectd - that I saw the amusing parallel between them. In essence, I'm deploying science ships around my infrastructure to make it easier to get science back to my central repository so that I can advance my own technology. I really like how analogous they are.

I talked about doing the collectd work on twitter, and Martijn Heemels expressed interest since he would also like write_graphite on Precise, so I figured other people might want to get in on the action, so to speak. I could give you the package I made, or I could show you how I made it. That sounds more fun.

Like all good things, this project involves software from Jordan Sissel - namely fpm, effing package management. Ever had to make packages and deal with spec files, control files, or esoteric rulesets that made you go into therapy? Not anymore!

So first we need to install it, which is easy, because it's a gem:


$ sudo gem install fpm

Now, let's make a place to stage files before they get packaged:

$ mkdir ~/collectd-package

And grab the source tarball and untar it:

$ wget https://collectd.org/files/collectd-5.4.1.tar.gz
$ tar zxvf collectd-5.4.1.tar.gz
$ cd collectd-5.4.1/

(if you're reading this, make sure to go to collectd.org and get the new one, not the version I have listed here.)

Configure the Makefile, just like you did when you were a kid:

$ ./configure --enable-debug --enable-static --with-perl-bindings="PREFIX=/usr"

Hat tip to Mike Julian, who let me know that you can't enable debugging in the collectd tool unless you use the flag here, so save yourself some heartbreak by turning that on. Also, if I'm going to be shipping this around, I want to make sure that it's compiled statically, and for whatever reason, I found that the perl bindings were sad unless I added that flag.

Now we compile:

$ make

Now we "install":

$ make DESTDIR="/home/YOURUSERNAME/collectd-package" install

I've found that the install script is very grumpy about relative directory names, so I appeased it by giving it the full path to where the things would be dropped (the directory we created earlier).

We're going to be using a slightly customized init script. I took this from the version that comes with the precise 4.x collectd installation and added a prefix variable that can be changed. We didn't change the installation directories above, so by default, everything is going to eventually wind up in /opt/collectd/ and the init script needs to know about that:

$ cd ~
$ mkdir -p collectd-package/etc/init.d/
$ wget --no-check-certificate -O collectd-package/etc/init.d/collectd http://bit.ly/1mUaB7G
$ chmod +x collectd-package/etc/init.d/collectd

This is pulling in the file from this gist.

Now, we're finally ready to create the package:

$ fakeroot fpm -t deb -C collectd-package/ --name collectd \
--version 5.4.1 --iteration 1 --depends libltdl7 -s dir opt/ usr/ etc/

Since you may not be familiar with fpm, some of the options are obvious, but for the ones that aren't: -C changes directory to the given argument, --version is the version of the software, while --iteration is the version of the package. If you package this, deploy it, then find a bug in the packaging, when you package it again after fixing the problem you increment the iteration flag, and your package management can treat it as an upgrade. The --depends is a library that collectd needs on the end systems. -s sets the source type to "directory", and then we give it a list of directories to include (remembering that we've changed directories with the -C flag).

Also, this was my first foray into the world of fakeroot, which you should probably read about if you run Debian-based systems.

At this point, in the current directory, there should be "collectd_5.4.1-1.deb", a package file that works for installing using 'dpkg -i' or in a PPA or in a repo, if you have one of those.

Once collectd is installed, you'll probably want to configure it to talk to your graphite host. Just edit the config in /opt/collectd/etc/collectd.conf. Make sure to uncomment the write_graphite plugin line, and change the write_graphite section. Here's mine:


  
    Host "YOURGRAPHITESERVER"
    Port "2003"
    Protocol "tcp"
    LogSendErrors true
    # remember the trailing period in prefix
    #    otherwise you get CCIS.systems.linuxTHISHOSTNAME
    #    You'll probably want to change it anyway, because 
    #    this one is mine. ;-) 
    Prefix "CCIS.systems.linux."
    StoreRates true
    AlwaysAppendDS false
    EscapeCharacter "_"
  

Anyway, hopefully this helped you in some way. Building a puppet module is left as an exercise to the reader. I think I could do a simplistic one in about 5 minutes, but as soon as you want to intelligently decide which modules to enable and configure, then it gets significantly harder. Hey, knock yourself out! (and let me know if you come up with anything cool!)

by Matt Simmons at July 28, 2014 10:22 AM

Chris Siebenmann

Go is still a young language

Once upon a time, young languages showed their youth by having core incapabilities (important features not implemented, important platforms not supported, or the like). This is no longer really the case today; now languages generally show their youth through limitations in their standard library. The reality is that a standard library that deals with the world of the modern Internet is both a lot of work and the expression of a lot of (painful) experience with corner cases, how specifications work out in practice, and so on. This means that such a library takes time, time to write everything and then time to find all of the corner cases. When (and while) the language is young, its standard library will inevitably have omissions, partial implementations, and rough corners.

Go is a young language. Go 1.0 was only released two years ago, which is not really much time as these things go. It's unsurprising that even today portions of the standard library are under active development (I mostly notice the net packages because that's what I primarily use) and keep gaining additional important features in successive Go releases.

Because I've come around to this view, I now mostly don't get irritated when I run across deficiencies in the corners of Go's standard packages. Such deficiencies are the inevitable consequence of using a young language, and while they're obvious to me that's because I'm immersed in the particular area that exposes them. I can't expect authors of standard libraries to know everything or to put their package to the same use that I am. And time will cure most injuries here.

(Sometimes the omissions are deliberate and done for good reason, or so I've read. I'm not going to cite my primary example yet until I've done some more research about its state.)

This does mean that development in Go can sometimes require a certain sort of self-sufficiency and willingness to either go diving into the source of standard packages or deliberately find packages that duplicate the functionality you need but without the limitations you're running into. Sometimes this may mean duplicating some amount of functionality yourself, even if it seems annoying to have to do it at the time.

(Not mentioning specific issues in, say, the net packages is entirely deliberate. This entry is a general thought, not a gripe session. In fact I've deliberately written this entry as a note to myself instead of writing another irritated grump, because the world does not particularly need another irritated grump about an obscure corner of any standard Go package.)

by cks at July 28, 2014 03:07 AM

July 27, 2014

Ubuntu Geek

VirtualBox 4.3.14 released and ubuntu installation instructions included

VirtualBox is a powerful x86 and AMD64/Intel64 virtualization product for enterprise as well as home use. Not only is VirtualBox an extremely feature rich, high performance product for enterprise customers, it is also the only professional solution that is freely available as Open Source Software under the terms of the GNU General Public License (GPL) version 2.
(...)
Read the rest of VirtualBox 4.3.14 released and ubuntu installation instructions included (577 words)



by ruchi at July 27, 2014 11:32 PM

Rands in Repose

The internet is still at the beginning of its beginning

From Kevin Kelly on Medium:

But, but…here is the thing. In terms of the internet, nothing has happened yet. The internet is still at the beginning of its beginning. If we could climb into a time machine and journey 30 years into the future, and from that vantage look back to today, we’d realize that most of the greatest products running the lives of citizens in 2044 were not invented until after 2014. People in the future will look at their holodecks, and wearable virtual reality contact lenses, and downloadable avatars, and AI interfaces, and say, oh, you didn’t really have the internet (or whatever they’ll call it) back then.

Yesterday was a study in contrasts. I was cleaning up part of the garage in the morning and found a box full of CD cases containing classic video games: Quake, Baldur’s Gate II, Riven, and Alice. Later in the day, I also had the opportunity to play the open Destiny Beta for a few hours.

Anyone who believes that we’re not a part of a wide open frontier only need look at what was considered state of the art just a few years ago.

#

by rands at July 27, 2014 04:28 PM

Chris Siebenmann

Save your test scripts and other test materials

Back in 2009 I tested ssh cipher speeds (although it later turned out to be somewhat incomplete). Recently I redid those tests on OmniOS, with some interesting results. I was able to do this (and do it easily) because I originally did something that I don't do often enough: I saved the script I used to run the tests for my original entry. I didn't save full information, though; I didn't save information on exactly how I ran it (and there are several options). I can guess a bit but I can't be completely sure.

I should do this more often. Saving test scripts and test material has two uses. First, you can go back later and repeat the tests in new environments and so on. This is not just an issue of getting comparison data, it's also an issue of getting interesting data. If the tests were interesting enough to run once in one environment they're probably going to be interesting in another environment later. Making it easy or trivial to test the new environment makes it more likely that you will. Would I have bothered to do these SSH speed tests on OmniOS and CentOS 7 if I hadn't had my test script sitting around? Probably not, and that means I'd have missed learning several things.

The second use is that saving all of this test information means that you can go back to your old test results with a lot more understanding of what they mean. It's one thing to know that I got network speeds of X Mbytes/sec between two systems, but there are a lot of potential variables in that simple number. Recording the details will give me (and other people) as many of those variables as possible later on, which means we'll understand a lot more about what the simple raw number means. One obvious aspect of this understanding is being able to fully compare a number today with a number from the past.

(This is an aspect of scripts capturing knowledge, of course. But note that test scripts by themselves don't necessarily tell you all the details unless you put a lot of 'how we ran this' documentation into their comments. This is probably a good idea, since it captures all of this stuff in one place.)
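
As a purely illustrative shape for that (not the original script), the 'how we ran this' notes can live right at the top of the saved script:

#!/bin/sh
# sshspeed.sh -- how it was run: ./sshspeed.sh <testhost>, client and server
# on the same test network; record the OS and OpenSSH versions with the results.
# (Everything here is a sketch of the idea, not the original script.)
HOST="$1"
for cipher in aes128-ctr aes256-ctr aes128-cbc arcfour; do
    echo "== $cipher"
    # dd reports throughput on stderr once the transfer finishes
    dd if=/dev/zero bs=1M count=256 |
        ssh -c "$cipher" "$HOST" 'cat > /dev/null'
done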

by cks at July 27, 2014 04:50 AM


Administered by Joe. Content copyright by their respective authors.