Planet Sysadmin               

          blogs for sysadmins, chosen by sysadmins...

February 12, 2016

Simplehelp

How to Install Beta Apps on Android

This tutorial will walk you through each step of the process of installing the beta versions of your favorite Apps on your Android phone or tablet.

The first thing you’ll need to do is allow your Android device to install software from “Unknown Sources”. To do so, start by tapping the Settings button.


Then select Lock Screen and security

Locate the section titled Unknown sources and switch the toggle to the ON position. Tap OK when prompted for confirmation.

Now head over to APK Mirror to locate the beta version of the App you want to install. Download the file and transfer it to your Android device as you would any other.

Locate the file using your Android File Manager and then tap on it to install.
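
If you prefer working from a computer, a roughly equivalent way to side-load the same APK is with adb from the Android platform tools. This is only a sketch, and the file name below is a placeholder for whatever you downloaded:

# Sketch: side-loading the downloaded beta APK over USB with adb.
# USB debugging must be enabled on the device; the file name is a placeholder.
adb devices                        # confirm the device is connected and authorised
adb install -r your-app-beta.apk   # -r replaces the existing install, keeping app data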

Now you’ll go through the installation process as you would with any other App from the Google Play Store. That’s it!

by Ross McKillop at February 12, 2016 08:03 PM

Everything Sysadmin

WebCast on how to "Fail Better": Friday, Feb 26

I'll be giving a talk "Fail Better: Radical Ideas from the Practice of Cloud Computing" as part of the ACM Learning Series at at 2pm EST on Friday, February 26, 2016. Pre-registration is required.

In this talk I explain 3 of the most important points from our newest book, The Practice of Cloud System Administration. The talk applies to everyone, whether or not you are "in the cloud".

"See" you there!

February 12, 2016 03:00 PM

February 11, 2016

Everything Sysadmin

How SysAdmins Devalue Themselves

I write a 3-times-a-year column in ACM Queue Magazine. In this issue I cover 2 unrelated topics: "How Sysadmins Devalue Themselves" and how to track on-call coverage. Enjoy!

Q: Dear Tom, How can I devalue my work? Lately I've felt like everyone appreciates me, and, in fact, I'm overpaid and underutilized. Could you help me devalue myself at work?

A: Dear Reader, Absolutely! I know what a pain it is to lug home those big paychecks. It's so distracting to have people constantly patting you on the back. Ouch! Plus, popularity leads to dates with famous musicians and movie stars. (Just ask someone like Taylor Swift or Leonardo DiCaprio.) Who wants that kind of distraction when there's a perfectly good video game to be played?

Here are some time-tested techniques that everyone should know.

Click here to read the entire article...

Note: This article can be viewed for free, however I encourage you to subscribe to ACM Queue Magazine. ACM members can access it online for free, or a small fee gets you access online or via an app.

February 11, 2016 03:00 PM

Server Density

Why we use NGINX

Wait, another article about NGINX?

You bet. Every single Server Density request goes through NGINX. NGINX is such an important part of our infrastructure. A single post couldn’t possibly do it justice. Want to know why we use NGINX? Keep reading.

Why is NGINX so important?

Because it’s part of the application routing fabric. Routing, of course, is a highly critical function because it enables load balancing. Load balancing is a key enabler of highly available systems. Running those systems requires having “one more of everything”: one more server, one more datacenter, one more zone, region, provider, et cetera. You just can’t have redundant systems without a load balancer to route requests between redundant units.

But why choose NGINX and not something else, say Pound?

We like Pound. It is easy to deploy and manage. There is nothing wrong with it. In fact it’s a great option, as long as your load balancing setup doesn’t have any special requirements.

Special requirements?

Well, in our case we wanted to handle WebSocket requests. We needed support for the faster SPDY and HTTP2 protocols. We also wanted to use the Tornado web framework. So, after spending some time with Pound, we eventually settled on NGINX open source.

NGINX is an event-driven web server with built-in proxying and load balancing features. It has native support for the FastCGI and uWSGI protocols, which allows us to run Python WSGI apps in fast application servers such as uWSGI.

NGINX also supports third party modules for full TCP/IP socket handling. This allows us to pick and mix between our asynchronous WebSocket Python apps and our upstream Node.js proxy.

What’s more, NGINX is fully deployable and configurable through Puppet.

So, how do you deploy NGINX? Do you use Puppet?

Indeed, we do. In fact we’ve been using Puppet to manage our infrastructure for several years now. We started by building manifests to match the setup of our old hosting environment at Terremark, so we could migrate to Softlayer.

In setting up Puppet, we needed to choose between writing our own manifest or reaching out to the community. As advocates of reusing existing components (versus building your own), a visit to the Forge was inevitable.


We run Puppet Enterprise using our own manifests (the Puppet master pulls them from our GitHub repo). We also make intensive use of Puppet Console and Live Management to trigger transient changes such as failover switches.

Before we continue, a brief note on Puppet Live Management.

Puppet Live Management: It’s Complicated (Now Deprecated)

Puppet Live Management allows admins to change configurations on the fly, without changing any code. It also allows them to propagate those changes to a subset of servers. Live Management is a cool feature.

Alas, last year Puppet Labs deprecated this cool feature. Live Management doesn’t show up in the enterprise console any more. Thankfully, the feature is still there but we needed to use a configuration flag to unearth and activate it again (here is how).


Our NGINX Module

Initially, we used the Puppet Labs nginx module. Unnerved by the lack of maintenance updates, though, we decided to fork James Fryman’s module (Puppet Labs’ NGINX module was a fork of James Fryman’s module anyway).

We started out by using a fork and adding extra bits of functionality as we went along. In time, as James Fryman’s module continued to evolve, we started using the (unforked) module directly.

Why we use NGINX with multiple load balancers

We used to run one big load balancer for everything, i.e. one load balancing server handling all services. In time, we realised this was not optimal. If requests for one particular service piled up, the congestion would often cross-talk to other services and affect them too.

So we decided to go with many, smaller load balancer servers. Preferably one for each service.

All our load balancers have the same underlying class in common. Depending on the specific service they route, each load balancer class will have its own corresponding service configuration.

Trigger transient changes using the console

One of the best aspects of the Live Management approach is that it helps overcome the lack of control inherent in the NGINX open source interface (NGINX Plus offers more options).


In order for NGINX configuration file changes to take effect, NGINX needs to reload. For example, to remove one node from the load balancer rotation, we would need to edit the corresponding configuration file and then trigger an NGINX reload.
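
For reference, this is roughly what that manual change looks like outside of Puppet. The upstream name, member addresses and file path below are illustrative examples, not our actual configuration:

# Sketch of the manual equivalent (example upstream, members and path):
# 1. Comment out (or remove) the node in the relevant upstream block, e.g. in
#    /etc/nginx/conf.d/myservice-upstream.conf:
#        upstream myservice {
#            server 10.0.0.11:8080;
#            # server 10.0.0.12:8080;    <- node taken out of rotation
#        }
# 2. Check the configuration and reload gracefully:
nginx -t && nginx -s reload        # or: service nginx reload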

Using the Puppet Console, we can change any node or group parameters and then trigger the node to run. Puppet will then reload NGINX. What’s cool is that in-flight connections are not terminated by the reload.

Summary

NGINX caters for a micro-service architecture composed of small, independent processes. It can handle WebSockets and it supports SPDY. It’s also compatible with the Tornado web framework. Oh, and it has native support for the FastCGI and uWSGI protocols.

Over the years, we’ve tried to fine-tune how we use Puppet to deploy NGINX. As part of that, we use Puppet Console and Live Management features quite extensively.

Is NGINX part of your infrastructure? Please share your own use case and suggestions in the comments below.

The post Why we use NGINX appeared first on Server Density Blog.

by Pedro Pessoa at February 11, 2016 01:53 PM

Aaron Johnson

Links: 2-10-2016

by ajohnson at February 11, 2016 06:30 AM

February 10, 2016

Everything Sysadmin

A feast of analogies

A few years ago a coworker noticed that all my analogies seemed to involve food. He asked if this was intentional.

I explained to him that my analogies contain many unique layers, but if you pay attention you'll see a lot of repetition... like a lasagna.

By the way...

I've scheduled this blog post to appear on the morning of Wednesday, Feb 10. At that time I'll be getting gum surgery. As part of recovery I won't be able to bite into any food for 4-6 months. I'll have to chew with my back teeth only.

Remember, folks, brushing and flossing are important. Don't ignore your teeth. You'll regret it later.

February 10, 2016 03:30 PM

Yellow Bricks

What’s new for Virtual SAN 6.2?



Yes, finally… the Virtual SAN 6.2 release has just been announced. Needless to say, I am very excited about this release. This is the release that I have personally been waiting for. Why? Well, I think the list of new functionality will make that obvious. There are a couple of clear themes in this release, and I think it is fair to say that data services / data efficiency is the most important. Let's take a look at the list of what is new first and then discuss the items one by one.

  • Deduplication and Compression
  • RAID-5/6 (Erasure Coding)
  • Sparse Swap Files
  • Checksum / disk scrubbing
  • Quality of Service / Limits
  • In-memory read caching
  • Integrated Performance Metrics
  • Enhanced Health Service
  • Application support

That is indeed a good list of new functionality, just 6 months after the previous release that brought you Stretched Clustering, 2 node ROBO, etc. I’ve already discussed some of these as part of the Beta announcements, but let's go over them one by one so we have all the details in one place. By the way, there is also an official VMware paper available here.

Deduplication and Compression has probably been the number one ask from customers when it comes to feature requests for Virtual SAN since version 1.0. Deduplication and Compression can only be enabled on all-flash configurations. The two always go hand-in-hand and are enabled at the cluster level. Note that this is referred to as nearline dedupe / compression, which basically means that deduplication and compression happen during destaging from the caching tier to the capacity tier.


Now let's dig a bit deeper. Deduplication granularity is 4KB; deduplication happens first and is then followed by an attempt to compress the unique block. The block will only be stored compressed when it can be compressed down to 2KB or smaller. The domain for deduplication is the disk group in each host. Of course the question then remains: what kind of space savings can be expected? The answer is, it depends. Our environments and our testing have shown space savings between 2x and 7x, where 7x applies to full clone desktops (the optimal situation) and 2x to a SQL database. Results, in other words, will depend on your workload.

Next on the list is RAID-5/6, or Erasure Coding as it is also referred to. In the UI, by the way, this is configured through the VM Storage Policies by defining the “Fault Tolerance Method” (FTM). When you configure this you have two options: RAID-1 (Mirroring) and RAID-5/6 (Erasure Coding). When RAID-5/6 is selected, the FTT (failures to tolerate) setting determines the layout: a 3+1 (RAID-5) configuration for FTT=1 and 4+2 (RAID-6) for FTT=2.


Note that “3+1” means 3 data blocks and 1 parity block; 4+2 means 4 data blocks and 2 parity blocks. Again, this functionality is only available for all-flash configurations. There is a huge benefit to using it, by the way:

Let's take the example of a 100GB disk:

  • 100GB disk with FTT =1 & FTM=RAID-1 set –> 200GB disk space needed
  • 100GB disk with FTT =1 & FTM=RAID-5/6 set –> 133.33GB disk space needed
  • 100GB disk with FTT =2 & FTM=RAID-1 set –> 300GB disk space needed
  • 100GB disk with FTT =2 & FTM=RAID-5/6 set –> 150GB disk space needed

As demonstrated, the space savings are enormous; especially with FTT=2, the 2x saving can and will make a big difference. Having said that, do note that the minimum number of hosts required also changes: 4 for RAID-5 (remember 3+1) and 6 for RAID-6 (remember 4+2). Configuring it in the Web Client is straightforward, and the Web Client also shows you the resulting layout of the data.


Sparse Swap Files is a new feature that can only be enabled by setting an advanced setting. It is one of those features that is a direct result of a customer feature request for cost optimization. As most of you hopefully know, when you create a VM with 4GB of memory, a 4GB swap file is created on a datastore at the same time. This is to ensure memory pages can be assigned to that VM even when you are overcommitting and there is no physical memory available. With VSAN this file is created “thick”, at 100% of the memory size. In other words, a 4GB swap file will take up 4GB which can’t be used by any other object/component on the VSAN datastore. When you have a handful of VMs there is nothing to worry about, but if you have thousands of VMs then this adds up quickly. By setting the advanced host setting “SwapThickProvisionedDisabled” the swap file will be provisioned thin and disk space will only be claimed when the swap file is consumed. Needless to say, we only recommend using this when you are not overcommitting on memory. Having no space for swap and needing to write to swap wouldn’t make your workloads happy.
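
For reference, host advanced settings like this are normally set per host with esxcli. The exact option path below is an assumption based on the setting name above, so verify it against the documentation for your build before relying on it:

# Sketch only: enabling sparse swap via a host advanced setting with esxcli.
# The option path is assumed from the setting name mentioned above -- verify it.
esxcli system settings advanced set -o /VSAN/SwapThickProvisionDisabled -i 1
# Confirm the current value:
esxcli system settings advanced list -o /VSAN/SwapThickProvisionDisabled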

Next up is the Checksum / disk scrubbing functionality. As of VSAN 6.2, for every 4KB write a 5-byte checksum is calculated and stored separately from the data. Note that this happens even before the write hits the caching tier, so even an SSD corruption would not impact data integrity. On a read, of course, the checksum is validated, and if there is a checksum error it will be corrected automatically. Also, in order to ensure that over time stale data does not decay in any shape or form, there is a disk scrubbing process which reads the blocks and corrects them when needed. Intel crc32c is leveraged to optimize the checksum process. Note that it is enabled by default for ALL virtual machines as of this release, but if desired it can be disabled through policy for VMs which do not require this functionality.

Another big ask, primarily by service providers, was Quality of Service functionality. There are many aspects of QoS, but one of the major asks was definitely the capability to limit VMs or Virtual Disks to a certain number of IOPS through policy. This is simply to prevent a single VM from consuming all available resources of a host. One thing to note is that when you set a limit of 1000 IOPS, VSAN uses a normalized block size of 32KB. Meaning that when pushing 64KB writes, the effective limit is 500 IOPS. When you are doing 4KB writes (or reads for that matter), however, they are still counted as 32KB blocks since this is the normalized value. Keep this in mind when setting the limit.

When it comes to caching there was also a nice “little” enhancement. As of 6.2 VSAN also has a small in-memory read cache. Small in this case means 0.4% of a host’s memory capacity up to a max of 1GB. Note that this in-memory cache is a client side cache, meaning that the blocks of a VM are cached on the host where the VM is located.

Besides all these great performance and efficiency enhancements, a lot of work has of course also been done around the operational aspects. As of VSAN 6.2 you as an admin no longer need to dive into VSAN Observer; you can just open up the Web Client to see all the performance statistics you want about VSAN. It provides a great level of detail, ranging from how a cluster is behaving down to the individual disk. What I personally find very interesting about this performance monitoring solution is that all the data is stored on VSAN itself. When you enable the performance service you simply select the VSAN storage policy and you are set. All data is stored on VSAN and all the calculations are done by your hosts. Yes indeed, a distributed and decentralized performance monitoring solution, where the Web Client is just showing the data it is provided.

Of course all new functionality, where applicable, has health check tests. This is one of those things that I got used to so fast, and already take for granted. The Health Check will make your life as an admin so much easier, not just the regular tests but also the pro-active tests which you can run whenever you desire.

Last but not least I want to call out the work that has been done around application support; I think especially the support for core SAP applications is something that stands out!

If you ask me, but of course I am heavily biased, this release is the best release so far and contains all the functionality many of you have been asking for. I hope that you are as excited about it as I am, and will consider VSAN for new projects or when current storage is about to be replaced.

"What’s new for Virtual SAN 6.2?" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.

by Duncan Epping at February 10, 2016 01:23 PM

Canllaith.org

Sennheiser HD 202 II Professional Headphones Review

Are you among those certified music lovers? If yes, you have probably been looking for top quality sound output devices that will bring you to a whole new world of sound-tripping. Perhaps you have heard about the Sennheiser HD 202. It is one of the most sought-after headsets today. If you want to know more […]

by Canllaith at February 10, 2016 05:27 AM

February 09, 2016

Trouble with tribbles

Building influxdb and grafana on Tribblix

For a while now I've been looking at alternative ways to visualize kstat data. Something beyond JKstat and KAR, at least.

An obvious thought is: there are many time-series databases being used for monitoring now, with a variety of user-configurable dashboards that can be used to query and display the data. Why not use one of those?

First the database. For this test I'm using InfluxDB, which is written, like many applications are these days, in Go. Fortunately, Go works fine on illumos and I package it for Tribblix, so it's fairly easy to follow the instructions: make a working directory, cd there, and:

export GOPATH=`pwd`
go get github.com/influxdb/influxdb 
cd $GOPATH/src/github.com/influxdb/influxdb
go get -u -f -t ./...
go clean ./...
go install ./...
 
(Note that it's influxdb/influxdb, not influxdata/influxdb. The name was changed, but the source and the build still use the old name.)

That should just work, leaving you with binaries in $GOPATH/bin.

So then you'll want a visualization front end. Now, there is Chronograf. Unfortunately it's closed source (that's fine, companies can make whatever choices they like) which means I can't build it for Tribblix. The other obvious path is Grafana.

Building Grafana requires Go, which we've already got, and Node.js. Again, Tribblix has Node.js, so we're (almost) good to go.

Again, it's mostly a case of following the build instructions. For Grafana, this comes in 2 parts. The back-end is Go, so make a working directory, cd there, and:

export GOPATH=`pwd`
go get github.com/grafana/grafana
cd $GOPATH/src/github.com/grafana/grafana
go run build.go setup
$GOPATH/bin/godep restore
go run build.go build
 
You'll find the Grafana server in $GOPATH/src/github.com/grafana/grafana/bin/grafana-server

The front-end involves a little variation to get it to work properly. The problem here is that a basic 'npm install' will install both production and development dependencies. We don't actually want to do development of Grafana, which ultimately requires webkit and won't work anyway. So we really just want the production pieces, and we don't want to install anything globally. But we still need to run 'npm install' to start with, as otherwise the dependencies get messed up. Just ignore the errors and warnings around PhantomJS.

npm install
npm install --production
npm install grunt-cli
./node_modules/.bin/grunt --force

With that, you can fire up influxd and grafana-server, and get them to talk to each other.
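
If you just want a quick local test, something along these lines should do; both tools fall back to their defaults, and the placeholder paths refer to the two working directories created above:

# Quick local smoke test. The two placeholders are the influxdb and grafana
# working directories created above (each was its own GOPATH).
<influxdb-workdir>/bin/influxd &
cd <grafana-workdir>/src/github.com/grafana/grafana
./bin/grafana-server &
# Grafana listens on http://localhost:3000 by default; add InfluxDB as a data
# source pointing at its HTTP API on http://localhost:8086.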

For the general aspects of getting Grafana and Influxdb to talk to each other, here's a tutorial I found useful.

Now, with all this in place, I can go back to playing with kstats.

by Peter Tribble (noreply@blogger.com) at February 09, 2016 07:57 PM

Server Density

Deploying nginx with Puppet

Back in 2013 we wrote about our experience deploying Nginx with Puppet and how we transitioned from Pound.

As you would expect, Puppet has evolved since then. In fact, deployments are becoming bigger and more intricate by the day. To cater for that complexity, it’s necessary to abstract as much of our automation code as possible, grouping it around modules and using inheritance on classes and nodes.

That is a good first step, but it may not suffice in scenarios where we need to reuse 3rd party modules or make our own modules reusable in other environments. This is what led to the creation of the role and profiles design pattern.

Nodes are assigned a role class, which is equivalent to a high level task, such as a web or mail server. Within these classes we have profile classes that correspond to: technology-specific configurations (for things like Nginx or Apache), monitoring, backups, et cetera. This helps keep things organized and therefore easier to manage.

Craig Dunn wrote first about roles and profiles and Adrien Thebo compared them to Legos.

In addition to new design patterns, new tools have also been introduced to further abstract configuration management. Hiera, for example, stores configuration data outside Puppet manifests, the code that holds configuration logic.


When people embark on their DevOps journey, having good end-to-end examples to refer to is invaluable. What we wanted was a template of sorts: an example of how to deploy Nginx with Puppet, using roles and profiles together with Hiera. We couldn’t find any that met those requirements, so we decided to write one.

We endeavoured to follow best practices where possible. For simplicity’s sake, we’ve skipped things like integrating a version control system, proper 3rd party module lifecycle maintenance, and using Puppet environments.

Requirements

We assume you have a working Puppet setup: either a single standalone server where you can run puppet apply manifests/site.pp, or a full-blown Puppet master server together with a server where you will deploy Nginx.
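
For the standalone case, a run could look something like this (a sketch; the paths match the layout used later in this post):

# Sketch: applying the manifest on a standalone server, pointing Puppet at the
# module path used throughout this post.
sudo puppet apply --modulepath=/etc/puppet/modules /etc/puppet/manifests/site.pp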

Using Puppet with Hiera to deploy Nginx

The following is an example deployment of Nginx (and PHP-FPM) on Ubuntu 14.04 or later, using Puppet 3 and Hiera, along with the role and profiles pattern we discussed earlier on. We will be using the popular jfryman/puppet-nginx and mayflower/puppet-php modules for that.

The first step is to configure Hiera by editing /etc/puppet/hiera.yaml:


File: hiera.yaml
----------------

---
:backends:
  - yaml
 
:hierarchy:
  - "nodes/%{::fqdn}"
  - "roles/%{::role}"
  - common
 
:yaml:
  :datadir: /etc/puppet/hiera/

:logging:
  - console

Hiera does a lookup of your YAML configuration under the datadir. It first searches for a file with the hostname (fqdn), then one with the assigned role, and then a common file for all your servers. Keep in mind that every time you edit this file (shouldn’t be too often) you have to restart your Puppet server (if running inside Passenger, just restart Apache: sudo service apache2 restart).

The hostname YAML file (in my case /etc/puppet/hiera/nodes/nodefqdn.yaml) is where the specifics of this host are stored:


File: nodefqdn.yaml
-------------------

---
roles:
   - roles::www

vdomain: example.com

nginx::config::vhost_purge: true
nginx::config::confd_purge: true

nginx::nginx_vhosts:
  "%{hiera('vdomain')}":
    ensure: present
    rewrite_www_to_non_www: true
    www_root: "/srv/www/%{hiera('vdomain')}/"
    try_files:
      - '$uri'
      - '$uri/'
      - '/index.php$is_args$args'

nginx::nginx_locations:
  'php':
    ensure: present
    vhost: example.com
    location: '~ \.php$'
    www_root: "/srv/www/%{hiera('vdomain')}/"
    try_files:
      - '$uri'
      - '/index.php =404'
    location_cfg_append:
      fastcgi_split_path_info: '^(.+\.php)(.*)$'
      fastcgi_pass: 'php'
      fastcgi_index: 'index.php'
      fastcgi_param SCRIPT_FILENAME: "/srv/www/%{hiera('vdomain')}$fastcgi_script_name"
      include: 'fastcgi_params'
      fastcgi_param QUERY_STRING: '$query_string'
      fastcgi_param REQUEST_METHOD: '$request_method'
      fastcgi_param CONTENT_TYPE: '$content_type'
      fastcgi_param CONTENT_LENGTH: '$content_length'
      fastcgi_intercept_errors: 'on'
      fastcgi_ignore_client_abort: 'off'
      fastcgi_connect_timeout: '60'
      fastcgi_send_timeout: '180'
      fastcgi_read_timeout: '180'
      fastcgi_buffer_size: '128k'
      fastcgi_buffers: '4 256k'
      fastcgi_busy_buffers_size: '256k'
      fastcgi_temp_file_write_size: '256k'
    
  'server-status':
    ensure: present
    vhost: "%{hiera('vdomain')}"
    location: /server-status
    stub_status: true
    location_cfg_append:
      access_log: off
      allow: 127.0.0.1
      deny: all

serverdensity_agent::plugin::nginx::nginx_status_url: "http://%{hiera('vdomain')}/server-status"

nginx::nginx_upstreams:
  'php':
    ensure: present
    members:
      - unix:/var/run/php5-fpm.sock

php::fpm: true

php::fpm::settings:
  PHP/short_open_tag: 'On'

php::extensions:
    json: {}
    curl: {}
    mcrypt: {}

php::fpm::pools:
  'www':
    listen: unix:/var/run/php5-fpm.sock
    pm_status_path: /php-status

And to finish the Hiera part, we can add some configuration common to all servers, like the Server Density monitoring credentials, in the /etc/puppet/hiera/common.yaml file:


File: common.yaml
-----------------

serverdensity_agent::sd_account: bencer
serverdensity_agent::api_token: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
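
At this point you can sanity-check the hierarchy from the command line with the hiera CLI; the fqdn and role values below are examples, substitute your own:

# Sketch: querying Hiera the way Puppet would, passing the facts that drive
# the hierarchy (example fqdn/role values).
hiera -c /etc/puppet/hiera.yaml vdomain ::fqdn=web01.example.com ::role=www
# -d adds debug output showing which YAML files were consulted.
hiera -d -c /etc/puppet/hiera.yaml serverdensity_agent::sd_account ::fqdn=web01.example.com ::role=www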

With the Hiera side of things complete, we can now configure the main Puppet manifest. This is what /etc/puppet/manifests/site.pp should look like:


File: site.pp
-------------

node default {
  hiera_include('roles')
}

This instructs Puppet to include the role classes defined in the roles section we added to the YAML file. We therefore don’t need all the node definitions in our .pp files anymore (so things stay clean and awesome).

To make all this possible, we need to create the necessary classes for roles and profiles. The roles are created as an additional module under /etc/puppet/modules/roles. Within that folder we need to create a manifests folder with the role definition. In our case, it would be /etc/puppet/modules/roles/manifests/www.pp:


File: www.pp
------------

class roles::www {
  include profiles::nginx
  include profiles::php
  include profiles::serverdensity
  class { 'serverdensity_agent::plugin::nginx': }
}

This class includes the different profiles. Here it’s the Nginx profile, the PHP profile and the Server Density agent profile. In addition we call the class that installs the Server Density Nginx plugin.

This is Server Density monitoring Nginx in a LEMP stack setup.
By the way, if you are looking for hosted monitoring that integrates with Puppet, you should sign up for a 2-week trial of Server Density.

Then the profiles module needs to define these 3 previous profiles. What these profile classes do is call the 3rd party modules we are using, getting the configuration specifics from Hiera.

The Nginx profile class is defined on /etc/puppet/modules/profiles/manifests/nginx.pp:


File: nginx.pp
--------------

class profiles::nginx {
  class{ '::nginx': }
}

The PHP profile class is defined on /etc/puppet/modules/profiles/manifests/php.pp:


File: php.pp
------------

class profiles::php {
  class{ '::php': }
}

And finally the Server Density agent profile class is defined on /etc/puppet/modules/profiles/manifests/serverdensity.pp:


File: serverdensity.pp
----------------------

class profiles::serverdensity {
  class{ '::serverdensity_agent': }
}

To finish, we need to install the modules we are using from the Puppet forge:

sudo puppet module install jfryman-nginx --modulepath /etc/puppet/modules
sudo puppet module install mayflower-php --modulepath /etc/puppet/modules
sudo puppet module install serverdensity-serverdensity_agent --modulepath /etc/puppet/modules

And we are done. Wait for the next Puppet run to apply changes or do it now in your target server with:

sudo puppet agent --test --debug

Further Steps

Keep your modules up to date

We installed 3rd party modules with the Puppet module subcommand. This was easy and simple. It was also suboptimal. This approach cannot handle conflicts between module dependencies. Modules might be outdated in the forge, and different versions may be fetched when updates are pushed. This can lead to version mismatch and other incompatibilities.

librarian-puppet is one way to deal with those challenges. It handles module maintenance (install, update, remove) with strict dependency checking. Sometimes, strict dependency checking can be too hard. That’s why librarian-puppet-simple was created. This is a cut down version of librarian-puppet (it includes no dependency checking).
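
As a hedged example, managing the same three modules with librarian-puppet could look roughly like this; the module names and the install path are taken from this post, but check the exact names and current versions on the Forge:

# Sketch: swapping "puppet module install" for librarian-puppet.
gem install librarian-puppet
cd /etc/puppet
cat > Puppetfile <<'EOF'
forge "https://forgeapi.puppetlabs.com"

mod 'jfryman-nginx'
mod 'mayflower-php'
mod 'serverdensity-serverdensity_agent'
EOF
librarian-puppet install --path /etc/puppet/modules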

One further step is r10k. It handles Puppet repositories at scale, using git branches and Puppet environments. Puppet infrastructure with r10k and How to build a Puppet repo using r10k with roles and profiles are great guides here.

Puppet Environments

Environments allow different configurations over the same Puppet code base. They are typically used when we work with more than one scenario, for example development, QA, staging and production. Puppet environments are useful in any scenario where the number of nodes might vary or configurations change (for example, using a standalone database server for development and a replicated cluster for production).

Summary

Deploying Nginx (and PHP-FPM) with Puppet shouldn’t be that difficult. Existing modules make our life easier because we don’t have to reinvent the wheel. Wrapping them in the role/profiles pattern allows customisation with no Puppet coding whatsoever (making use of Hiera).

What about you? Have you started using role/profiles or any other Puppet design patterns? How do you make your code reusable across different environments and projects?

Let us know in the comments or with pull requests against this GitHub repository.

The post Deploying nginx with Puppet appeared first on Server Density Blog.

by Jorge Salamero at February 09, 2016 07:32 PM

Rands in Repose

Operator is a monospace typeface from Hoefler&Co

From typography.com:

About two years ago, H&Co Senior Designer Andy Clymer proposed that we design a monospace typeface. Monospace (or “fixed-width”) typefaces have a unique place in the culture: their most famous ancestor is the typewriter, and they remain the style that designers reach for when they want to remind readers about the author behind the words. Typewriter faces have become part of the aesthetic of journalism, fundraising, law, academia, and politics; a dressier alternative to handwriting, but still less formal than something set in type, they’re an invaluable tool for designers.

I’ve dropped Operator Mono into both Sublime and Terminal. A monospace typeface needs to be readable and without a lot of opinion. After brief usage, Operator Mono easily meets both requirements.


by rands at February 09, 2016 07:11 PM

February 08, 2016

UnixDaemon

Puppet integration tests in (about) seven minutes

While puppet-lint and rspec-puppet (thanks to Tim Sharpe) will help ensure your Puppet code is clean and produces what you’d expect in the compiled catalog, there are times when you’ll want to go further than unit testing with rspec-puppet and do some basic integration tests to ensure the system ends up in the desired state. In this post, with the assumption that you have Docker installed, I’ll show a simple way to run basic integration tests against a Puppet module. Hopefully in about seven minutes.

In order to collect all the shinies we’ll use Docker, in conjunction with HashiCorp's Packer, as our platform, and we’ll write our tests in JSON and run them under a Go based test framework. If you want to read along without copy and pasting you can find the code in the Basic Puppet integration testing with Docker and Packer repo.

Packer is “a tool for creating machine and container images for multiple platforms from a single source configuration.” In our case we’ll use it to create a Docker image, install Puppet and our testing tools and then run the test case inside the container to confirm that everything did as we expected.

If you’re on Linux, and installing Packer by hand, it’s as simple as:

mkdir /tmp/puppet-and-packer
cd /tmp/puppet-and-packer/

wget -O packer.zip https://releases.hashicorp.com/packer/0.8.6/packer_0.8.6_linux_amd64.zip
unzip packer.zip

./packer --version
0.8.6

Software installed, we’ll be good devopsy people and write our tests first. For our simple example all we’ll do with Puppet is install the strace package, so we’ll write a simple Goss test to verify it was installed in the container. You can read more about Testing with Goss on this very site.

cat goss.json

{
    "package": {
        "strace": {
            "installed": true
        }
    }
}

Now we have our test cases we can write our tiny puppet module:

mkdir -p modules/strace/manifests
cat modules/strace/manifests/init.pp

class strace {

  package { 'strace': ensure => 'present', }

}

and the site.pp that uses it

cat site.pp
include strace

Tiny examples created, we’ll get to the meat of the configuration, our Packer file.

{
  "builders": [{
    "type": "docker",
    "image": "ubuntu",
    "commit": "true"
 }],
 "provisioners": [{
   "type": "shell",
   "inline": [
     "apt-get update",
     "apt-get install puppet curl -y",
     "curl -L https://github.com/aelsabbahy/goss/releases/download/v0.0.22/goss-linux-amd64 > /usr/local/bin/goss && chmod +x /usr/local/bin/goss"
     ]
  }, {
    "type": "file",
    "source": "goss.json",
    "destination": "/tmp/goss.json"
  }, {
    "type": "puppet-masterless",
    "manifest_file": "site.pp",
    "module_paths": [ "modules" ]
  }, {
    "type": "shell",
    "inline": [
      "/usr/local/bin/goss -g /tmp/goss.json validate --format documentation"
    ]
  }]
}

As an aside, we see why JSON is a bad format for application config: you can’t include comments in native JSON. In our example we’ll use two of Packer's three main sections, builders and provisioners. Builders create and generate images for various platforms, in our case Docker. Provisioners install and configure software inside our images. In the above code we’re using three different types.

  • inline shell to install our dependencies, such as Puppet and Goss
  • file to copy our tests in to the image
  • puppet-masterless to upload our code and run Puppet over it.

With all the parts in place we can now create a container and have it run our tests.

# and now we build the image and run tests inside the container
./packer validate strace-testing.json
./packer build strace-testing.json

==> docker: Creating a temporary directory for sharing data...
==> docker: Pulling Docker image: ubuntu
...
    # puppet output
    docker: Notice: Compiled catalog for 5a68ef0b80b1.udlabs.priv in environment production in 0.10 seconds
    docker: Info: Applying configuration version '1454970724'
    docker: Notice: /Stage[main]/Strace/Package[strace]/ensure: ensure changed 'purged' to 'present'
    docker: Info: Creating state file /var/lib/puppet/state/state.yaml
    docker: Notice: Finished catalog run in 8.59 seconds
...
    # goss output
==> docker: Provisioning with shell script: /tmp/packer-shell193374970
    docker: Package: strace: installed: matches expectation: [true]
    docker:
    docker:
    docker: Total Duration: 0.009s
    docker: Count: 1, Failed: 0
==> docker: Committing the container

I’ve heavily snipped Packer's output and only shown the parts of relevance to this post. You can see the usual Puppet output, here showing the installation of the strace package:

docker: Notice: /Stage[main]/Strace/Package[strace]/ensure: ensure changed 'purged' to 'present'

followed by the Goss tests running and confirming all is as expected:

docker: Package: strace: installed: matches expectation: [true]
docker: Count: 1, Failed: 0

It’s trivially simple to replace Goss in these examples with InSpec, ServerSpec or TestInfra if you’re more comfortable with those tools. I chose Goss for this post as it’s very simple to install and has no dependency chain. Something that’s very relevant when you’ve only given yourself seven minutes to show people a new, and hopefully useful, technique.

As a closing note, you’ll want to occasionally clean up the old Docker images this testing creates.
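
A couple of commands help with that; they target dangling (untagged) images, so double-check nothing you care about is untagged first:

# List dangling (untagged) images left behind by repeated packer builds.
docker images -f dangling=true
# Remove them once you've confirmed nothing important is untagged.
docker rmi $(docker images -q -f dangling=true)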

by dwilson@unixdaemon.net (Dean Wilson) at February 08, 2016 10:45 PM

LinuxHaxor

Instructions to Find A Realtor When Buying A Home

When you are purchasing a home, it is vital to have an expert on your side to guide you through the process. There are numerous obstacles that can come up along the way, and you will also need solid negotiating skills and good knowledge of market trends, legal matters and local contractors to help with any potential issues. The person to turn to is a professional Realtor. Here are a few tips that can help you find the best Realtor for your home buying needs.

To begin with, take the time to search for a Realtor who has experience in the areas you are interested in. If a Realtor is not familiar with the areas you will likely be buying in, he or she won’t be as capable of guiding you in the right direction when it comes to making an offer and negotiating a price on the home you wish to purchase. This can mean that you wind up spending more than you need to, or making an offer on a home you love that the seller will never accept because it is far too low relative to market values.

Once you have narrowed down your list to a few Realtors who are familiar with your target areas, take the time to interview them. Ask about their background and level of experience, as well as the company they work for and how that affiliation may help in your home search. Discuss their preferred methods of communication, how they will supply you with listing reports on homes you may be interested in, and how they handle times when they are away from the office and you may need help. Try to find a Realtor that you really click with, so you will be comfortable working with them for the next couple of months while you are looking at homes and making an offer on the one you finally decide you want to buy.

In addition to interviewing several Realtors, it is important to take the time to visit a few houses with your top contenders. Make note of how each Realtor shows you homes and what kind of things they point out during your home search. Keep in mind this is not the time to sign a contract to work exclusively with one Realtor; rather, it is a time to explore their different styles in order to find the one that best matches your own style and preferences.

Finding a Realtor is an important part of the home buying process. By following the steps listed above, you will give yourself the best chance of finding a true professional who you enjoy working with. By doing this, you will make your home buying process smoother and less stressful for everybody involved. Take the time to work through this process carefully, as it will make all the difference as you search for your dream home.

by fdorra.haxor at February 08, 2016 06:14 PM

Everything Sysadmin

ACM Interviews: Thomas Limoncelli

I'm excited to announce that I've been interviewed as part of the ACM Interviews series. Listen to the 1-hour interview or read the summary via this link

ACM Interviews are part of the ACM Learning Center (click on Podcasts).

Over the last 20+ years, Stephen Ibaraki's interviews have included famous computer scientists and innovators like Vint Cerf, Eric Schmidt, Leslie Lamport, and more. (Complete list here.) Stephen is involved in many professional organizations, frequently addresses the United Nations, and has received numerous honors, including being the first and only recipient of the Computing Canada IT Leadership Lifetime Achievement Award.

I was quite honored to be asked. (Actually I was confused... when approached at an ACM event last year I assumed Stephen was asking me to nominate people worth interviewing, not asking me to be interviewed!)

I consider this a major career milestone. I am grateful to all those that have helped me get to where I am today.

Background on the ACM:

The Association for Computing Machinery (the US representative to IFIP, the International Federation for Information Processing, founded under the United Nations' UNESCO):

The ACM's reach is 3.4 million people, with 1.5 million users of its digital library. It is the largest and most prestigious international professional organization in computing science, education, research, innovation and professional practice, with 200 events and conferences, 78 newsletters/publications, 37 special interest groups such as SIGGRAPH, and the top awards in computing science, such as the ACM Turing Award, which is considered the Nobel Prize of Computing and carries a 1 million USD prize.

February 08, 2016 03:00 PM

SLAPTIJACK

Removing a Single Line from known_hosts With sed

Every so often, something changes on the network and you find that your .ssh/known_hosts file has gotten out of date. Usually this happens after an upgrade or device change. You'll get the rather ominous warning that REMOTE HOST IDENTIFICATION HAS CHANGED!

If you are confident that someone isn't doing something nasty and the RSA key fingerprint on the other side has legitimately changed, you can safely remove the offending key and the new key will be added the next time you connect. Fortunately, this is easily done with a sed one-liner:

$ sed -i -e '185d' .ssh/known_hosts

In this case, '185' is the line number that was reported as containing the offending key.
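
If you'd rather not hunt for line numbers at all, ssh-keygen can find and remove the entry by hostname (the hostname below is a placeholder); it also keeps a backup as known_hosts.old:

# Show the matching entry (and its line number) for a given host.
ssh-keygen -F badhost.example.com
# Remove every key for that host; a backup is written to ~/.ssh/known_hosts.old
ssh-keygen -R badhost.example.com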

by Scott Hebert at February 08, 2016 02:00 PM

Canllaith.org

VicTsing 3 in 1 Clip-On Camera Phone Lens

With the VicTsing 3 in 1 Clip-On phone camera lens, you can take professional images right from your iPhone. These VicTsing lenses are designed to give you a high class camera experience and capture clearer images. Made from high quality lenses, it can serve you for a long time without the need of buying a […]

by Canllaith at February 08, 2016 07:51 AM

February 07, 2016


Canllaith.org

Nomadclip Lightning to USB Carabiner Clip for Apple Devices

Nomadclip is one of the innovative smartphone cables that doubles as a carabiner. This gadget was introduced to me by a Hawaii plumber when he came to service a leak in my home’s wall. When you use it to charge your phone, it works like a typical charging cable. It also works in […]

by Canllaith at February 07, 2016 05:02 AM

February 06, 2016

Everything Sysadmin

LISA Conversations Episode 6: Alice Goldfuss on Scalable Meatfrastructure (LISA15)

In this episode we talked with Alice Goldfuss about the changes you need to make when growing a DevOps or sysadmin team. Alice also talked about dealing with remote workers and her experience at film school, plus she shared insights about giving your first presentation at a conference.

You don't want to miss this!

For the complete list of LISA Conversations, visit our homepage.

February 06, 2016 05:00 PM

Canllaith.org

Best Bluetooth Headphones Out There? (February 2016)

QY8 Bluetooth Headphones Review. Technological advancement has paved the way for the development of Bluetooth headphones that are truly convenient to use anywhere you are. Everyone loves music, whatever the genre. It’s as if we cannot live without music; thus, this new breed of headphones offers an exciting music experience to all. The QY8 […]

by Canllaith at February 06, 2016 07:55 AM


Administered by Joe. Content copyright by their respective authors.