Planet Sysadmin               

          blogs for sysadmins, chosen by sysadmins...

September 04, 2015

Google Blog

Through the Google lens: Search Trends August 28–September 3

With the long Labor Day weekend beckoning, we’ll spare you the introduction and dive right into the past week of search trends.

Party at the VMAs
It’s been nearly a week since it all went down, but given the number of trending topics related to the Sunday evening spectacle known as the 2015 MTV Video Music Awards, we feel almost obligated to recap how it played out on Search. So, here are the highlights: bizarre but wonderful host Miley Cyrus pulled in a cool 5 million searches, while her onstage confrontation with Nicki Minaj (which may or may not have been planned) got another 500,000+. Kanye West accepted the show’s highest honor—the Michael Jackson Vanguard Award—in a rambling 13-minute speech (during which he may or may not have committed to running for president in 2020), racking up more than 200,000 searches.

Justin Bieber cried while performing his new single, “What Do You Mean,” inspiring 500,000+ searches for the performance and another 500,000+ for the song; and finally, Taylor Swift—no stranger to VMA drama featuring Kanye West and acceptance speeches, as well as public spats with Nicki Minaj—premiered her video for “Wildest Dreams” (100,000+ searchers wanted to know more). For more trends from the show (and to find out which of these artists claimed the “Most Searched” title), check out the trends page.

Kentucky courthouse drama
A Kentucky county clerk found new notoriety this week, appearing in Hot Trends not just once but three times. Kim Davis has repeatedly refused to issue marriage licenses to same-sex couples, claiming it would infringe on her religious beliefs. Multiple couples sued her, and Judge David L. Bunning ordered her to issue the licenses. Finally, after her request for a stay from the Supreme Court was denied, yesterday Davis was held in contempt of court. With Davis in jail, her deputies began issuing licenses to couples today. As the saga played out in Rowan County, people turned to the web with inquiries ranging from “What religion is Kim Davis?” and “What law did Kim Davis break?” to broader questions like “Why do we need marriage licenses?” and “How long have there been marriage licenses in the U.S.?”

Fall fun
The days are getting shorter (in the Northern Hemisphere, at least) and the long Labor Day weekend marks the unofficial end of summer. In the U.S., people have turned to the web to learn more about the origins of Labor Day and to ask an important fashion question: “Why can’t you wear white after Labor Day?” Our advice: don’t let anyone tell you you can’t.

Autumn may make some melancholy, but for football fans it’s a time to rejoice. Tomorrow marks the first college football Saturday of the season, and searchers are gearing up in anticipation. Yesterday’s Michigan game against Utah drew 500,000+ searches as people looked to find out more about the game. As the debut game for new Michigan coach Jim Harbaugh, expectations were high, but were deflated (see below) when Utah won 24-17. But it’s Michigan rival—and defending national champs—Ohio State that lit up the scoreboard as the most searched team over the past month:

Finally, though the NFL season doesn’t officially start until next Thursday, the league is in the news after Patriots QB Tom Brady’s four-game suspension for “Deflategate” was overturned. Brady was the top trend Thursday, with 1 million searches, as people asked questions like “Is Tom Brady suspended?” and “What does ‘nullified’ mean?”

by Google Blogs at September 04, 2015 04:04 PM

Debian Admin

Linux Utilities Cookbook (a $26.99 value) FREE for a limited time!

Over 70 recipes to help you accomplish a wide variety of tasks in Linux quickly and efficiently.

Everything you need to know about Linux but were afraid to ask. This book will make you a master of the command line and teach you how to configure the network, write shell scripts, build a custom kernel, and much more.

by ruchi at September 04, 2015 01:58 PM


MQTT vs Websockets vs HTTP/2: The Best IoT Messaging Protocol?

While doing any sort of development for an Internet of Things (IoT) connected device such as an Arduino, Raspberry Pi, or other embedded platform, the question inevitably comes up: what is the best messaging protocol to use? Assuming your application can’t use straight-up web pages, there are really two viable options at the moment, and one upcoming.

MQTT

MQ Telemetry Transport (MQTT) from IBM is the bad boy of small device messaging. It uses TCP, is lightweight, and has features beneficial to IoT devices. It is mature and there are a lot of client and server implementations, making it easier to develop upon.

MQTT works on a pub/sub architecture. A client subscribes to a channel on a server, and when the server receives new information for that channel, it pushes it out to each subscribed device.
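As an illustration of the model, here is a toy sketch of pub/sub in Python. This is not a real MQTT implementation; the broker class and channel names are invented purely for illustration:

```python
from collections import defaultdict

class Broker:
    """Toy illustration of MQTT-style pub/sub: clients subscribe to a
    channel (topic), and the broker pushes each published message out
    to every subscriber of that channel."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # channel -> list of callbacks

    def subscribe(self, channel, callback):
        self._subscribers[channel].append(callback)

    def publish(self, channel, message):
        # New information for a channel is pushed to all its subscribers.
        for callback in self._subscribers[channel]:
            callback(message)

broker = Broker()
readings = []
broker.subscribe("sensors/temperature", readings.append)
broker.publish("sensors/temperature", "21.5C")
print(readings)  # ['21.5C']
```

A real MQTT broker adds sessions, retained messages, and wildcard topic matching on top of this basic shape, but the subscribe/publish flow is the same.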

Another great feature of MQTT is that you can set a priority or Quality of Service (QoS):

  • 0: The client/server will deliver the message once, with no confirmation required.
  • 1: The client/server will deliver the message at least once, confirmation required.
  • 2: The client/server will deliver the message exactly once by using a handshake process.

Each successive level uses more bandwidth, but gives you a correspondingly stronger assurance of delivery.
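To make the QoS levels concrete, here is an illustrative sketch of the at-least-once (QoS 1) behaviour in Python. It is not drawn from any real MQTT library, and the function names are hypothetical:

```python
def deliver_qos1(message, send, max_attempts=5):
    """At-least-once delivery (QoS 1): re-send the message until the
    receiver acknowledges it.  The receiver may therefore see duplicates,
    which is the price of delivery assurance without the full QoS 2
    handshake."""
    for attempt in range(1, max_attempts + 1):
        if send(message):          # send() returns True once an ack arrives
            return attempt         # number of transmissions it took
    raise RuntimeError("no ack after %d attempts" % max_attempts)

# Simulate a lossy link: the first two acknowledgements get lost.
received = []
ack_outcomes = iter([False, False, True])
def lossy_send(msg):
    received.append(msg)           # the message itself always arrives...
    return next(ack_outcomes)      # ...but the ack may not make it back

attempts = deliver_qos1("temp=21.5", lossy_send)
print(attempts, received)  # 3 ['temp=21.5', 'temp=21.5', 'temp=21.5']
```

QoS 0 would be a single `send()` with no ack check at all, and QoS 2 adds a second round trip so the receiver processes the message exactly once.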

With MQTT your application must have a library to talk to the MQTT server and handle publish/subscribe methods.

Websockets

Now that browser Websockets are standardized, they make a very compelling option for reading and writing data to IoT devices. Websockets are the best way to achieve full push/pull communication with a web server, something that is not possible over the basic HTTP protocol.

Websockets are great if you have a full web client! For many IoT devices, however, that is a lot of overhead and might not even be an option.

Another downside of using Websockets is that you would need to come up with your own protocol for the transmission of data. This isn’t too difficult (for example, using JSON) but it does mean inventing a non-standard protocol.
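For instance, if the payload were JSON, a homegrown envelope for sensor readings might look like the sketch below. The field names and helper functions are invented for illustration; both ends of the Websocket have to agree on this ad-hoc schema, which is exactly the non-standard-protocol trade-off:

```python
import json
import time

def encode_reading(device_id, metric, value):
    """Serialise one sensor reading into a self-describing JSON envelope
    suitable for sending as a Websocket text frame."""
    return json.dumps({
        "device": device_id,
        "metric": metric,
        "value": value,
        "ts": int(time.time()),   # timestamp so late frames can be ordered
    })

def decode_reading(payload):
    """Reverse of encode_reading: unpack a frame back into its fields."""
    msg = json.loads(payload)
    return msg["device"], msg["metric"], msg["value"]

frame = encode_reading("rpi-01", "temperature", 21.5)
assert decode_reading(frame) == ("rpi-01", "temperature", 21.5)
```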

MQTT over Websockets

A frequent use of Websockets is actually to run MQTT over Websockets. With this configuration, you can use the Paho JS MQTT client to make the connections and to read/write data. This makes for very easy development, but again you must be able to run the full browser stack on the device, so it can be very limiting. Since it does use MQTT, it may be beneficial to use this approach for any webpage that reads MQTT data. There is also some configuration to be done in the background to create the proxy between MQTT and Websockets; however, there are plenty of services such as HiveMQ that will do this for you.

HTTP/2

The new HTTP protocol, soon to be seen everywhere, uses a totally different structure than HTTP/1.1. Where HTTP/1.1 was a text-based request/response protocol, HTTP/2 is a binary, multiplexed streaming protocol. It supports a few features useful in this situation, such as the ability for the client to suspend data streaming, but it also has some drawbacks. HTTP/2 was designed to have the client remain connected to the server for the remainder of the browsing session; using it as a messaging protocol would mean leaving this connection open indefinitely. You also miss out on some of the features of MQTT, such as QoS. HTTP/2 is so new that some of these features could in theory be added to the protocol and libraries, but it is not yet mature enough to say whether that will happen.

For more discussion on this, check out Can HTTP/2 Replace MQTT from @kellogh.

What’s the Best?

Right now, I would say MQTT is the best protocol for most IoT devices. It is so widely adopted that even newer options like Websockets support MQTT in some respects, so your device will be able to communicate its data effectively to other devices over the internet.

I’m excited to see where HTTP/2 takes us with regards to the Internet of Things. The protocol is better suited for lightweight communication and could definitely be a contender in the future.

Photo courtesy NodeMCU Wikipedia

by Dave at September 04, 2015 01:38 PM

September 03, 2015

Trouble with tribbles

Tribblix Graphical Login

Up to this point, login to Tribblix has been very traditional. The system boots to a console login, you enter your username and password, and then start your graphical desktop in whatever manner you choose.

That's reasonable for old-timers such as myself, but we can do better. The question is how to do that.

OpenSolaris, and thus OpenIndiana, have used gdm, from GNOME. I don't have GNOME, and don't wish to be forever locked in dependency hell, so that's not really an option for me.

There's always xdm, but it's still very primitive. I might be retro, but I'm also in favour of style and bling.

I had a good long look at LightDM, and managed to get that ported and running a while back. (And part of that work helped get it into XStreamOS.) However, LightDM is a moving target, it's evolving off in other directions, and it's quite a complicated beast. As a result, while I did manage to get it to work, I was never happy enough to enable it.

I've gone back to SLiM, which used to be hosted at BerliOS. The current source appears to be here. It has the advantage of being very simple, with minimal dependencies.

I made a few modifications and customizations, and have it working pretty well. As upstream doesn’t seem terribly active, and some of my changes are pretty specific, I decided to fork the code; my repo is here.

Apart from the basic business of making it compile correctly, I've put in a working configuration file, and added an SMF manifest.

SLiM doesn’t have a very good mechanism for selecting what environment you get when you log in. By default it will execute your .xinitrc (and fail horribly if you don’t have one). There is a mechanism where it can look in /usr/share/xsessions for .desktop files, and you can use F1 to switch between them, but there’s currently no way to filter that list, tell it what order to show them in, or set a default. So I switched that bit off.

I already have a mechanism in Tribblix to select the desktop environment, called tribblix-session. This allows you to use the setxsession and setvncsession commands to define which session you want to run, either in regular X (via the .xinitrc file) or using VNC. So my SLiM login calls a script that hooks into and cooperates with that, and then falls back on some sensible defaults - Xfce, MATE, WindowMaker, or - if all else fails - twm.

It's been working pretty well so far. It can also do automatic login for a given user, and there are magic logins for special purposes (console, halt, and reboot, with the root password).

Now what I need is a personalized theme.

by Peter Tribble at September 03, 2015 08:27 PM

Everything Sysadmin

CfP: USENIX Container Management Summit (UCMS '15)

The 2015 USENIX Container Management Summit (UCMS '15) will take place November 9, 2015, during LISA15 in Washington, D.C.

Important Dates

  • Submissions due: September 5, 2015, 11:59 p.m. PDT
  • Notification to participants: September 19, 2015
  • Program announced: Late September 2015

(quoting the press release):

UCMS '15 is looking for relevant and engaging speakers and workshop facilitators for our event on November 9, 2015, in Washington, D.C. UCMS brings together people from all areas of containerization--system administrators, developers, managers, and others--to identify and help the community learn how to effectively use containers.

Submissions

Proposals may be 45- or 90-minute formal presentations, panel discussions, or open workshops.

This will be a one-day summit. Speakers should be prepared for interactive sessions with the audience. Workshop facilitators should be ready to challenge the status quo and provide real-world examples and strategies to help attendees walk away with tools and ideas to improve their professional lives. Presentations should stimulate healthy discussion among the summit participants.

Submissions in the form of a brief proposal are welcome through September 5, 2015. Please submit your proposal to the summit chairs via email; you can also reach them at that address with any questions or comments. Presentation details will be communicated to the presenters of accepted talks and workshops by September 19, 2015. Speakers will receive a discount on conference admission. If you have special circumstances, please contact the USENIX office.

Click for more info.

September 03, 2015 05:28 PM

Yellow Bricks

VMworld Session: VSAN – Software Defined Storage Platform of the Future #STO6050


Unfortunately I haven’t been able to attend too many sessions, only 2 so far. This is one I didn’t want to miss as it was all about what VMware is working on for VSAN and layers that could sit on top of VSAN. Rawlinson and Christos spoke about where VSAN is today first. Mainly discussion the use cases (monolithic apps like Exchange, SQL etc.) and the simplicity VSAN brings. After which an explanation of the VSAN object/component model was provided which was the lead in to the future.

We are in the middle of an evolution towards cloud native applications, Christos said. Cloud native apps scale in a different way than traditional apps, and their requirements differ. There is usually no need for HA and DRS, as these apps contain that functionality within their own framework. What does this mean for the vSphere layer?

VMware vSphere Integrated Containers and VMware Photon Platform enable these new types of applications. But how do we enable them from a storage point of view? What kind of scale will we require? Will we need different data services? Will we need different tools? What about performance?

The first project discussed is the Performance Service, which will come as part of the Health Check plugin, providing metrics at the cluster level, host level, disk group level, disk level… The Performance Service architecture is very interesting, and is not a “standard vCenter Server service”. Providing deep insights using per-host traces is not possible, as it would not scale. Instead a distributed model is proposed, in which each host collects data, each cluster rolls it up, and this can be done for many clusters. Data is both processed and stored in a distributed fashion. The cost of a solution like this should be around 10% of one core on a server. Just think what a vCenter Server would look like at the same scale and cost: a 1,000-host solution could easily result in a 100 vCPU requirement, which is not realistic.

Rawlinson demoed a potential solution for this: a scenario with thousands of hosts from which data is gathered, analyzed, and presented in what appears to be an HTML5 interface. The solution doesn’t just provide details on the environment, it also allows you to mitigate problems. Note that this is a prototype of an interface that may or may not at some point be released. If you like what you see, though, make sure to leave a comment, as I am sure that helps make this prototype a reality!

Next discussed is the potential to leverage VSAN not just for virtual machines but also for containers, with the ability to store files on top of VSAN. A distributed file system for cloud native apps is then introduced. Some of the requirements for such a distributed file system are a scalable data path, clones at massive scale, multi-tenancy, and multi-purpose use.

VMware is also prototyping this distributed file system and has it running in its labs. It sits on top of VSAN, leveraging that scalable data path to store its data and metadata. Rawlinson demonstrated creating 2,000 clones of a file in under a second across 1,000 hosts and running his application. Note that the application isn’t copied to those 1,000 hosts; it is simply a mount point on each of them: a truly distributed file system with extremely scalable clone and snapshot technology.

Christos wrapped up; the key point is that VSAN will be the enabler of future storage solutions, as it provides extreme scale at a low resource overhead. Awesome session, and a great peek into the future.

"VMworld Session: VSAN – Software Defined Storage Platform of the Future #STO6050" originally appeared on Yellow Bricks. Follow me on twitter - @DuncanYB.

by Duncan Epping at September 03, 2015 05:22 PM

Server Density

Building Your own Chaos Monkey

In 2012 Netflix introduced one of the coolest sounding names into the Cloud vernacular.

What Chaos Monkey does is simple. It runs on Amazon Web Services and its sole purpose is to wipe out production instances in a random manner.

The rationale behind those deliberate failures is a solid one.

Setting Chaos Monkey loose on your infrastructure—and dealing with the aftermath—helps strengthen your app. As you recover, learn and improve on a regular basis, you’re better equipped to face real failures without significant, if any, customer impact.

Our monkey

Since we don’t use AWS or Java, we decided to build our own lightweight simian in the form of a simple Python script. The end-result is the same. We set it loose on our systems and watch as it randomly seeks and destroys production instances.

What follows are our observations from those self-inflicted incidents, along with some notes on what to consider when using a Chaos Monkey on your own infrastructure.


Design Considerations

1. Trigger chaos events during business hours

It’s never nice to wake up your engineers with unnecessary on-call events in the middle of the night. Real failures can and do happen 24/7. When it comes to Chaos Monkey, however, it’s best to trigger failures when people are around to respond and fix them.

2. Decide how much mystery you want

When our Python script triggers a chaos event, we get a message in our HipChat room and everyone is on the lookout for strange things.

The message doesn’t specify what the failure is. We still need to triage the alerts and determine where the failures lie, just as we would in the event of a real outage. All this “soft” warning does is lessen the chance of failures going unnoticed.

3. Have several failure modes

Killing instances is a good way to simulate failures but it doesn’t cover all possible contingencies. At Server Density we use the SoftLayer API to trigger full and partial failures alike.

A server power-down, for example, causes a full failure. Disabling networking interfaces, on the other hand, causes partial failures where the host may continue to run (and perhaps even send reports to our monitoring service).

4. Don’t trigger sequential events

If there’s ever a bad time to set your Chaos Monkey loose, that’s during the aftermath of previous chaos event. Especially if the bugs you discovered are yet to be fixed.

We recommend you wait a few hours before introducing the next failure. Unless you want your team firefighting all day long.

5. Play around with event probability

Real world incidents have a tendency to transpire when you least expect them. So should your chaos events. Make them infrequent. Make them random. Space them out, by days even. That’s the best way to test your on-call readiness.
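The rules above can be condensed into a simple decision function. This is an illustrative Python sketch, not Server Density’s actual script; the business hours, cooldown, and probability values are made-up examples:

```python
import random
from datetime import datetime, timedelta

BUSINESS_HOURS = range(9, 17)      # only trigger when engineers are around
COOLDOWN = timedelta(hours=4)      # no back-to-back chaos events
TRIGGER_PROBABILITY = 0.05         # keep events infrequent and random

def should_trigger(now, last_event, rng=random.random):
    """Decide whether to unleash a chaos event right now, honouring the
    business-hours, cooldown, and probability rules described above."""
    if now.hour not in BUSINESS_HOURS:
        return False               # rule 1: never outside business hours
    if last_event is not None and now - last_event < COOLDOWN:
        return False               # rule 4: wait out the cooldown
    return rng() < TRIGGER_PROBABILITY   # rule 5: random, infrequent

# A 3 a.m. check never fires, regardless of the dice roll.
night = datetime(2015, 9, 3, 3, 0)
assert not should_trigger(night, None, rng=lambda: 0.0)

# During business hours, outside the cooldown, it fires ~5% of the time.
day = datetime(2015, 9, 3, 11, 0)
assert should_trigger(day, day - timedelta(hours=8), rng=lambda: 0.01)
```

A real script would run this check on a timer and, when it returns True, pick a random failure mode (power-down, network interface, and so on) to inflict on a random instance.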

Initial findings

We’ve been triggering chaos events for some time now. None of the issues we’ve discovered so far were caused by the server software. In fact, scenarios like failovers in load balancers (Nginx) and databases (MongoDB) worked very well.

Every single bug we found was in our own code. Most had to do with how our app interacts with databases in failover mode, and with libraries we’ve not written.

In our most recent Chaos run we experienced some inexplicable performance delays during two consecutive MongoDB server failovers. Rebooting the servers was not a viable long term fix as it results in a long downtime (>5 minutes).

It took us several days of investigation until we realised we were not invoking the MongoDB drivers properly.

The app delays caused by the Chaos run happened during work hours, so we were able to look at the issue immediately rather than wait for an on-call engineer to be notified and respond, in which case the investigation would have been harder.

Such discoveries help us report bugs and improve the resiliency of our software. Of course, it also means additional engineering hours and effort to get things right.


The Chaos Monkey is an excellent tool to test how your infrastructure behaves under unknown failure conditions. By triggering and dealing with random system failures, you help your product and service harden up and become resilient. This has obvious benefits to your uptime metrics and overall quality of service.

And if the whole exercise has such a cool name attached to it, then all the better.

Editor’s note: This post was originally published on 21st November, 2013 and has been completely revamped for accuracy and comprehensiveness.


The post Building Your own Chaos Monkey appeared first on Server Density Blog.

by David Mytton at September 03, 2015 01:00 PM

Everything Sysadmin

CfP: USENIX Release Engineering Summit (URES '15)

Hey all you devops, CI/CD/CD people! Hey all you packagers, launchers, and shippers. Hey all you container mavens and site reliability engineers!

Submissions due: September 4, 2015 - 11:59 pm

(quoting the press release):

At the third USENIX Release Engineering Summit (URES '15), members of the release engineering community will come together to advance the state of release engineering, discuss its problems and solutions, and provide a forum for communication for members of this quickly growing field. We are excited that this year LISA attendees will be able to drop in on talks so we expect a large audience.

URES '15 is looking for relevant and engaging speakers for our event on November 13, 2015, in Washington, D.C. URES brings together people from all areas of release engineering--release engineers, developers, managers, site reliability engineers and others--to identify and help propose solutions for the most difficult problems in release engineering today.

Click for more info.

September 03, 2015 04:28 AM


How to Block a Program From the Internet in Windows 10

This tutorial will take you every single step of the way through creating a Windows Firewall Rule to block a specific program (whichever you want) in Windows 10.

  1. Start out by clicking the Windows 10 Start Button and in the Search section type the word firewall. One of the items that will be displayed is Windows Firewall | Control Panel. Select that one.
  2. You’ll be presented with the main Windows 10 Firewall screen.
  3. From the column on the left side of the window, click the Advanced Settings… item.
  4. Now you’ll be presented with the Advanced section of the Windows 10 Firewall.
  5. Select the Outbound Rules item from the left-most column.
  6. Now you’ll be presented with the Advanced Outbound Rules section.
  7. This time we’re going to look in the column on the right side of the screen, titled Outbound Rules…. From this section, select New Rule…
  8. When asked which type of connection you want to block, select Program and then click the Next > button.
  9. Since you only want to block one program (not them all), click the Browse… button next to This program path:. NOTE: you can of course block more than one program by creating multiple rules.
  10. Click through the folders on your PC until you find the Application you want to block from accessing the Internet. If you’re having trouble locating it, it’s probably in the Program Files folder, likely in a sub-folder with either the program name or company name as a part of the folder name itself. When you’ve found it, select it by clicking on it once, and then click the Open button.
  11. Click the Next > button to continue.
  12. Select Block the connection and then click the Next > button.
  13. Make sure that all three items (Domain, Private and Public) have “check marks” next to them, and then click the Next > button.
  14. Give your newly created “rule” a name and quick description, and finally, click the Finish button.
  15. You’ll be returned to the Advanced Outbound Windows Firewall Settings section.
  16. This time, the column on the right side (Actions) will have your newly created Rule in it!
  17. As stated earlier in this tutorial, you can do this for as many programs in Windows 10 as you’d like. You can even use the “Copy” button to create similar rules without having to go through every step again.

by Ross McKillop at September 03, 2015 01:37 AM

September 02, 2015

Trouble with tribbles

The 32-bit dilemma

Should illumos, and the distributions based on it - such as Tribblix - continue to support 32-bit hardware?

(Note that this is about the kernel and 32-bit hardware, I'm not aware of any good cause to start dropping 32-bit applications and libraries.)

There are many good reasons to go 64-bit. Here are a few:

  • Support for 32-bit SPARC hardware never existed (Sun dropped it with Solaris 10, before OpenSolaris)
  • Most new hardware is 64-bit, new 32-bit systems are very rare
  • Even "old" x86 hardware is now 64-bit
  • DragonFly BSD went 64-bit
  • Solaris 11 dropped 32-bit
  • SmartOS is 64-bit only
  • Applications - or runtimes such as Go and Java 8 - are starting to exist only in 64-bit versions
  • We're maintaining, building, and shipping code that effectively nobody uses, and is therefore effectively untested
  • The time_t problem (traditional 32-bit time runs out in 2038)
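On that last point, the 2038 limit is easy to demonstrate: a signed 32-bit time_t runs out 2^31 - 1 seconds after the Unix epoch. A quick check, sketched in Python rather than C:

```python
from datetime import datetime, timezone

# The largest value a signed 32-bit time_t can hold.
MAX_32BIT_TIME_T = 2**31 - 1

# One second later, a signed 32-bit counter wraps negative.
rollover = datetime.fromtimestamp(MAX_32BIT_TIME_T, tz=timezone.utc)
print(rollover.isoformat())  # 2038-01-19T03:14:07+00:00
```

A 64-bit time_t pushes the same rollover roughly 292 billion years out, which is why a 64-bit userland makes the problem so much easier to solve.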

So, I know I'm retro and all, but it's getting very hard to justify keeping 32-bit support.

Going to a model where we just support 64-bit hardware has other advantages:

  • It makes SPARC and x86 equivalent
  • We can make userland 64-bit by default
  • ...which makes solving the time_t problem easier
  • ...and any remaining issues with large files and 64-bit inode numbers go away
  • We can avoid isaexec
  • At least on x86, 64-bit applications perform better
  • Eliminating 32-bit drivers and kernel makes packages and install images smaller

Are there any arguments for keeping 32-bit hardware support?

  • It’s a possible differentiator - a feature we have that others don’t. On the other hand, if the potential additional user base is zero then it makes no difference
  • There is still some 32-bit hardware in existence (mainly in embedded contexts)

Generally, though, the main argument against ripping out 32-bit hardware support is that it would be a certain amount of work, and the illumos project doesn’t have much in the way of spare resources, so the status quo persists.

My own plan for Tribblix was that once I had got to releasing version 1, version 2 would drop 32-bit hardware support. (I don’t need illumos to drop it; I can remove the support as I postprocess the illumos build and create packages.) As time goes on, I’m starting to wonder whether to drop 32-bit earlier.

by Peter Tribble at September 02, 2015 06:27 PM

Google Blog

Google Docs and Classroom: your school year sidekicks

School’s in! As you settle into your classes and start to juggle soccer practice, club meetings and homework, we’re here to help. We’ve been spending the summer “break” creating new tools to help you save time, collaborate with classmates and create your best work—all for free.

Schoolwork, minus the work 

Writing papers is now a lot easier with the Research tool in Docs for Android. You can search Google without leaving Docs, and once you find the quotes, facts or images you’re looking for, you can add them to your document with just a couple taps. That means less time switching between apps, and more time perfecting your thesis statement.

With Voice typing, you can record ideas or even compose an entire essay without touching your keyboard. To get started, activate Voice typing in the Tools menu when you're using Docs in Chrome. Then, when you’re on the go, just tap the microphone button on your phone’s keyboard and speak your mind. Voice typing is available in more than 40 languages, so we can help with your French homework, too. Voilà!

Do more, together

We’ve made it easier for you to tell what was added or deleted in Docs—and who made the changes. Now when you’ve left a document and you come back to it later, you can just click “See new changes” to pick up right where your classmates left off.

Forms helps you get a lot of information easily and in one place—so when you want to vote on your class field trip or collect T-shirt sizes for your team, you don’t have to sort through dozens of emails. With the new Forms, you can survey with style—choose one of the colorful new themes or customize your form with your own photo or logo, and we’ll choose the right color palette to match. Easily insert images, GIFs or videos and pick from a selection of question formats. Then send out your survey and watch as the responses roll in!

Your best work, your best you 

Creating presentations, crafting newsletters and managing your team’s budget is hard enough without having to worry about making everything look good. With the new collection of templates in Docs, Sheets and Slides, you can focus on your content while we make sure it gets the expert polish it deserves. Choose from a wide variety of reports, portfolios, resumes and other pre-made templates designed to make your work that much better, and your life that much easier.

With Explore in Sheets, you can now spend less time trying to decipher your data, and more time making a point. Explore creates charts and insights automatically, so you can visualize trends and understand your data in seconds on the web or on your Android. It’s like having an expert analyst right by your side.

Mission control, for teachers and students

A year ago, we launched Classroom to save teachers and students time and make it easier to keep classwork organized. Today we’re launching a Share to Classroom Chrome extension to make it easy for teachers to share a website with the entire class at the same time—no matter what kind of laptop students have. Now the whole class can head to a web page together, without losing precious minutes and focus to typos.

Rock this school year with Google Docs and Classroom. Your first assignment? Try these new features, which are rolling out today.

Posted by Ritcha Ranjan, Product Manager

by Google Blogs at September 02, 2015 12:00 PM

September 01, 2015

Everything Sysadmin

FreeBSD Journal Reviews TPOSANA

Greg Lehey wrote an excellent review of The Practice of System and Network Administration in the new issue of The FreeBSD Journal. Even though the book isn’t FreeBSD-specific, I’m glad the Journal was drawn to reviewing it.

For more about the FreeBSD Journal, including how to subscribe or purchase single issues, visit their website.

I’m subscribed to the Journal and I highly recommend it. The articles are top notch. Even if you don’t use FreeBSD, they’re a great way to learn about advanced technology and keep up with the industry.

September 01, 2015 04:28 PM

Yellow Bricks

VMworld 2015: Site Recovery Manager 6.1 announced


This week Site Recovery Manager 6.1 was announced. There are many enhancements in SRM 6.1, like the integration with NSX and policy-driven protection, but personally I feel that support for stretched storage is huge. When I say stretched storage I am referring to solutions like EMC VPLEX, Hitachi Virtual Storage Platform, IBM SAN Volume Controller, etc. In the past (and you still can today), when you had these solutions deployed you would have a single vCenter Server with a single cluster and move VMs around manually when needed, or let HA take care of restarts in failure scenarios.

As of SRM 6.1, running these types of stretched configurations is also supported. So how does that work, what does it allow you to do, and what does it look like? Well, in contrast to a vSphere Metro Storage Cluster solution, with SRM 6.1 you will be using two vCenter Server instances. Each vCenter Server instance will have an SRM server attached to it, which will use a storage replication adapter to communicate with the array.

But why would you want this? Why not just stretch the compute cluster as well? Many have deployed these stretched configurations for disaster avoidance purposes. The problem, however, is that there is no form of orchestration whatsoever, which means that all workloads will typically come up in a random order. In some cases the application knows how to recover from situations like that; in most cases it does not, leaving you with a lot of work, as after a failure you will need to restart services, or VMs, in the right order. This is where SRM comes in; this is the strength of SRM: orchestration.

Besides orchestrating a full failover, what SRM can also do in the 6.1 release is evacuate a datacenter using vMotion in an orchestrated / automated way. If a disaster is about to happen, you can now use the SRM interface to move virtual machines from one datacenter to another with just a couple of clicks; this is called a planned migration, as can be seen in the screenshot above.

Personally I think this is a great step forward for stretched storage and SRM, very excited about this release!

"VMworld 2015: Site Recovery Manager 6.1 announced" originally appeared on Follow me on twitter - @DuncanYB.

by Duncan Epping at September 01, 2015 02:47 PM


My Free Software Activities in August 2015

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community, because it can give ideas to newcomers and it's one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I have been paid to work 6.5 hours on Debian LTS. In that time I did the following:

  • Prepared and released DLA-301-1 fixing 2 CVEs in python-django.
  • Did one week of “LTS Frontdesk” with CVE triaging. I pushed 11 commits to the security tracker.

Apart from that, I also gave a talk about Debian LTS at DebConf 15 in Heidelberg and coordinated a work session to discuss our plans for Wheezy. Have a look at the video recordings.

DebConf 15

I attended DebConf 15 with great pleasure after having missed DebConf 14 last year. While I did not do lots of work there, I participated in many discussions and I certainly came back with a renewed motivation to work on Debian. That’s always good. :-)

For the concrete work I did during DebConf, I can only claim two schroot uploads to fix the lack of support of the new “overlay” filesystem that replaces “aufs” in the official Debian kernel, and some Distro Tracker work (fixing an issue that some people had when they were logged in via Debian’s SSO).

While the numerous discussions I had during DebConf can't be qualified as “work”, they certainly contributed to building up work plans for the future:

As a Kali developer, I attended multiple sessions related to derivatives (notably the Debian Derivatives Panel).

I was also interested in the “Debian in the corporate IT” BoF led by Michael Meskes (Credativ's CEO). He pointed out a number of problems that corporate users might have when they first consider using Debian, and we will try to do something about this. Expect further news and discussions on the topic.

Martin Krafft, Luca Filipozzi, and I had a discussion with the Debian Project Leader (Neil) about how to revive/transform Debian's Partners program. Nothing is fleshed out yet, but at least the process initiated by the former DPL (Lucas) is moving forward again.

Other Debian work

Sponsorship. I sponsored an NMU of pep8 by Daniel Stender as it was a requirement for prospector… which I also sponsored since all the required dependencies are now available in Debian. \o/

Packaging. I NMUed libxml2 2.9.2+really2.9.1+dfsg1-0.1, fixing 3 security issues and an RC bug that was breaking publican. Since there had been no upstream fix for more than 8 months, I went back to the former version 2.9.1. It's in line with the new requirements of the release managers: a package in unstable should migrate to testing reasonably quickly; it's not acceptable to keep it unfixed for months. With this annoying bug fixed, I could again upload a new upstream release of publican, so I prepared and uploaded 4.3.2-1. It was my first source-only upload. This release was more work than I expected and I filed no fewer than 3 bugs upstream (new bash-completion install path, a request to provide the sources of a minified JavaScript file, dropping a .po file for an invalid language code).

GPG issues with smartcard. Back from DebConf, when I wanted to sign some keys, I stumbled again upon the problem which makes it impossible for me to use my two smartcards one after the other without first deleting the stubs for the private key. It's not a new issue, but I decided that it was time to report it upstream, so I did it: #2079 on the upstream bug tracker. Some research helped me find a way to work around the problem. Later in the month, after a dist-upgrade and a reboot, I was no longer able to use my smartcard as an SSH authentication key. Again, it had already been reported, but there was no clear analysis, so I did my own and added the results of my investigation in #795368. It looks like the culprit is pinentry-gnome3 not working when started by the gpg-agent, which is started before the DBus session. A simple fix is to restart the gpg-agent in the session, but I have no idea yet what the proper fix should be (letting systemd manage the graphical user session and start gpg-agent would be my first answer, but that doesn't solve the issue for users of other init systems, so it's not satisfying).

Distro Tracker. I merged two patches from Orestis Ioannou fixing some bugs tagged newcomer. There are more such bugs (I even filed two: #797096 and #797223), go grab them and do a first contribution to Distro Tracker like Orestis just did! I also merged a change from Christophe Siraut who presented Distro Tracker at DebConf.

I implemented in Distro Tracker the new authentication based on SSL client certificates that was recently announced by Enrico Zini. It works nicely, and this authentication scheme is far easier to support. Good job, Enrico!

The tracker broke during DebConf: it stopped being updated with new data. I tracked this down to a problem in the archive (see #796892). Apparently Ansgar Burchardt changed the set of compression tools used on some jessie repositories, replacing bz2 with xz. He dropped the old Packages.bz2 but missed some Sources.bz2 files, which were thus stale, and APT reported “Hashsum mismatch” on the uncompressed content.
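To make the “Hashsum mismatch” failure mode concrete: APT compares the digest published in the archive metadata against the digest of the index file it actually downloaded, and a stale file fails that comparison. Here is a simplified sketch of the idea (my own illustration, not APT's real code):

```python
import hashlib

def apt_style_check(published_sha256, downloaded):
    """Return True when the downloaded index matches the published digest."""
    return hashlib.sha256(downloaded).hexdigest() == published_sha256

current = b"Package: foo\nVersion: 2.0\n"
stale   = b"Package: foo\nVersion: 1.0\n"  # e.g. a stale Sources.bz2 left behind

# The Release file publishes the digest of the *current* index...
published = hashlib.sha256(current).hexdigest()

assert apt_style_check(published, current)    # archive is consistent
assert not apt_style_check(published, stale)  # -> "Hashsum mismatch"
```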

Misc. I pushed some small improvements to my Salt formulas: schroot-formula and sbuild-formula. They will now auto-detect which overlay filesystem is available with the current kernel (previously “aufs” was hardcoded).


See you next month for a new summary of my activities.


by Raphaël Hertzog at September 01, 2015 11:49 AM


How to Use Finder to Create a Workflow

This guide will show you some tips on making the most of what isn't considered the greatest file manager in the world: your Mac's Finder. By customizing the Toolbar a bit, you can create time-saving workflows for repetitive and/or frequent tasks.

One of the things I find myself doing a lot is taking screenshots and editing them for tutorials, exactly like this one. I’ve come up with a way to customize the Toolbar in Finder by adding shortcuts to Apps and Utilities that I use to edit images for these tutorials.

Something like this could be mimicked if you’re a photographer, videographer, graphic artist etc. Especially people who have to ‘batch edit’ or work with multiple files at a time.

Here’s my setup –

  1. My Finder’s Toolbar has a mix of both the traditional tools like “back” and “forward”, the “layout” icon set, a Trash button, etc. Then there are the custom Apps that I’ve added, separated by a small space.
  2. To edit your Toolbar, select View from the main menu, then Customize Toolbar… from the drop-down list.
  3. Now you can drag-and-drop items to and from your Toolbar.
  4. I have a section set aside for a series of Apps – the GIMP image editor, ThumbsUp for batch resizing images, Skitch for adding notations, and finally ImageOptim to properly compress the final images.

     To add an App to your Toolbar, open another Finder window and drag-and-drop it onto your Toolbar (while the Customize Toolbar panel is still open) – as illustrated in the screenshot below.

  5. By dragging and dropping files onto these Apps I can quickly accomplish what I want to do. As seen in the screenshot below, when I drag images onto ThumbsUp, it quickly resizes them to a pre-specified width.
  6. The resulting files are output to the same folder as the original files, so that…
  7. I can drag all of them to the final part of my workflow, compressing them with ImageOptim.
  8. By having a quick place to drag-and-drop files, I have the flexibility of opening only the files I need with one App, then all of the files (if needed) with the next – and all from within Finder.

If you’ve found other interesting ways to get the most out of Finder that you can, by all means please leave a comment!

by Ross McKillop at September 01, 2015 10:00 AM

Google Webmasters

Mobile-friendly web pages using app banners

When it comes to search on mobile devices, users should get the most relevant answers, no matter if the answer lives in an app or a web page. We’ve recently made it easier for users to find and discover apps and mobile-friendly web pages. However, sometimes a user may tap on a search result on a mobile device and see an app install interstitial that hides a significant amount of content and prompts the user to install an app. Our analysis shows that this is not a good search experience and can be frustrating for users, because they are expecting to see the content of the web page.

Starting today, we’ll be updating the Mobile-Friendly Test to indicate that sites should avoid showing app install interstitials that hide a significant amount of content on the transition from the search result page. The Mobile Usability report in Search Console will show webmasters the number of pages across their site that have this issue.

After November 1, mobile web pages that show an app install interstitial that hides a significant amount of content on the transition from the search result page will no longer be considered mobile-friendly. This does not affect other types of interstitials. As an alternative to app install interstitials, browsers provide ways to promote an app that are more user-friendly.

App install interstitials that hide a significant amount of content provide a bad search experience

App install banners are less intrusive and preferred

App install banners are supported by Safari (as Smart Banners) and Chrome (as Native App Install Banners). Banners provide a consistent user interface for promoting an app and provide the user with the ability to control their browsing experience. Webmasters can also use their own implementations of app install banners as long as they don’t block searchers from viewing the page’s content.

If you have any questions, we’re always happy to chat in the Webmaster Central Forum.

Posted by Daniel Bathgate, Software Engineer, Google Search.

by Google Webmaster Central at September 01, 2015 10:53 AM

Google Blog

Google’s look, evolved

Google has changed a lot over the past 17 years—from the range of our products to the evolution of their look and feel. And today we’re changing things up once again.

So why are we doing this now? Once upon a time, Google was one destination that you reached from one device: a desktop PC. These days, people interact with Google products across many different platforms, apps and devices—sometimes all in a single day. You expect Google to help you whenever and wherever you need it, whether it’s on your mobile phone, TV, watch, the dashboard in your car, and yes, even a desktop!

Today we’re introducing a new logo and identity family that reflects this reality and shows you when the Google magic is working for you, even on the tiniest screens. As you’ll see, we’ve taken the Google logo and branding, which were originally built for a single desktop browser page, and updated them for a world of seamless computing across an endless number of devices and different kinds of inputs (such as tap, type and talk).

It doesn’t simply tell you that you’re using Google, but also shows you how Google is working for you. For example, new elements like a colorful Google mic help you identify and interact with Google whether you’re talking, tapping or typing. Meanwhile, we’re bidding adieu to the little blue “g” icon and replacing it with a four-color “G” that matches the logo.

This isn’t the first time we’ve changed our look and it probably won’t be the last, but we think today’s update is a great reflection of all the ways Google works for you across Search, Maps, Gmail, Chrome and many others. We think we’ve taken the best of Google (simple, uncluttered, colorful, friendly), and recast it not just for the Google of today, but for the Google of the future.

You’ll see the new design roll out across our products soon. Hope you enjoy it!


by Google Blogs at September 01, 2015 10:11 AM

Ferry Boender

Ansible-cmdb v1.4: a host overview generator for ansible-managed hosts

Ansible-cmdb takes the output of Ansible's setup module and converts it into a static HTML overview page containing system configuration information. It supports multiple templates and extending information gathered by Ansible with custom data.

You can visit the Github repo, or view an example output here.

This is the v1.4 release of ansible-cmdb, which brings a bunch of bug fixes and some new features:

  • Support for host inventory patterns (e.g. foo[01:04]).
  • Support for 'vars' and 'children' groups.
  • Support passing a directory to the -i param, in which case all the files in that directory are interpreted as one big hosts file.
  • Support for the use of local jQuery files instead of loading them via a CDN. This allows you to view the hosts overview in your browser using file://. See the documentation for info on how to enable it (hint: ansible-cmdb -p local_js=1).
  • Add a -f/--fact-caching flag for compatibility with fact_caching=jsonfile fact dirs (Rowin Andruscavage).
  • The search box in the html_fancy template is now automatically focused.
  • Show memory to one decimal to avoid "0g" in low-mem hosts.
  • Templates can now receive parameters via the -p option.
  • Strip ports from hostnames scanned from the host inventory file.
  • Various fixes in the documentation.
  • Fixes for Solaris output (memory and disk).
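For those unfamiliar with the inventory pattern syntax mentioned above: a pattern like foo[01:04] is Ansible shorthand for the hosts foo01 through foo04. A rough Python sketch of what that expansion means (my own illustration, not ansible-cmdb's actual code):

```python
import re

def expand_pattern(host):
    """Expand an Ansible-style numeric host range such as 'foo[01:04]'."""
    m = re.fullmatch(r"(.*)\[(\d+):(\d+)\](.*)", host)
    if not m:
        return [host]  # plain hostname, nothing to expand
    prefix, start, end, suffix = m.groups()
    width = len(start)  # zero-padding width is taken from the first bound
    return [f"{prefix}{i:0{width}d}{suffix}"
            for i in range(int(start), int(end) + 1)]

print(expand_pattern("foo[01:04]"))  # ['foo01', 'foo02', 'foo03', 'foo04']
print(expand_pattern("db9"))         # ['db9']
```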

I would like to extend my gratitude to the following contributors:

  • Sebastian Gumprich
  • Rowin Andruscavage
  • Cory Wagner
  • Jeff Palmer
  • Sven Schliesing

If you've got any questions, bug reports or whatever, be sure to open a new issue on Github!

by admin at September 01, 2015 07:07 AM

mypapit gnu/linux blog

Download Wordlist for dictionary attack

The Crackstation wordlist is one of the most (if not the most) comprehensive wordlists that can be used for dictionary attacks on passwords.

The wordlist comes in two flavors:

  1. Full wordlist (GZIP-compressed (level 9). 4.2 GiB compressed. 15 GiB uncompressed)
  2. Human-password only wordlist (GZIP-compressed. 247 MiB compressed. 684 MiB uncompressed)

Personally, I’ve already downloaded the full wordlist via torrent and tested it against a few PDF files (using pdfcrack) and UNIX password cracking (using John); all my test cases were successful. In my opinion, the wordlist is comprehensive enough for my needs.
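To make concrete what tools like pdfcrack and John do with such a wordlist, here is a minimal dictionary-attack sketch (the hash and the toy wordlist are hypothetical, and real crackers handle salts, formats, and speed far better):

```python
import hashlib

def dictionary_attack(target_hash, wordlist):
    """Hash each candidate word and compare against the target digest."""
    for word in wordlist:
        candidate = word.strip()
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None  # password not in the wordlist

# In practice the wordlist would be the (decompressed) Crackstation file,
# read line by line; a toy list stands in for it here.
words = ["letmein", "hunter2", "correcthorse"]
target = hashlib.sha256(b"hunter2").hexdigest()
print(dictionary_attack(target, words))  # hunter2
```

The reason a comprehensive wordlist matters is visible in the last line of the function: if the password never appears in the list, the attack simply fails.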

Since it looked like it took significant effort to compile this wordlist, I would rather encourage those who are interested to donate and get the wordlist from the project's website.

by Mohammad Hafiz mypapit Ismail at September 01, 2015 01:14 AM

August 31, 2015

Ubuntu Geek

FISH – A smart and user-friendly command line shell for Linux

fish, the friendly interactive shell (FISH), is a user-friendly command line shell intended mostly for interactive use. A shell is a program used to execute other programs.
Read the rest of FISH – A smart and user-friendly command line shell for Linux (200 words)


by ruchi at August 31, 2015 11:56 PM

Yellow Bricks

Virtual SAN beta coming up with dedupe and erasure coding!


Something I am personally very excited about is the fact that there is a beta coming up soon for an upcoming release of Virtual SAN. This beta is all about space efficiency and in particular will contain two new great features which I am sure all of you will appreciate testing:

  • Erasure Coding
  • Deduplication

My guess is that many people will get excited about dedupe, but personally I am also very excited about erasure coding. As it stands with VSAN today, when you deploy a 50GB VM and have failures to tolerate (FTT) set to 1, you will need ~100GB of capacity available. With erasure coding the required capacity will be significantly lower. What you will be able to configure is a 3+1 or a 4+2 scheme, not unlike RAID-5 and RAID-6. This means that from a capacity standpoint you will need 1.33x the space of a given disk when 3+1 is used, or 1.5x the space when 4+2 is used: a significant improvement over the 2x of FTT=1 in today's GA release. Do note, of course, that in order to achieve 3+1 or 4+2 you will need more hosts than you would normally need with FTT=1, as we will need to guarantee availability.
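The capacity overhead figures above are simply (data + parity) / data. A quick sketch of the arithmetic (the function is my own illustration, not a VSAN tool):

```python
def raw_capacity_needed(vm_size_gb, data_blocks, parity_blocks):
    """Raw capacity required for a VM protected by erasure coding.

    data_blocks/parity_blocks: e.g. 3+1 (RAID-5-like) or 4+2 (RAID-6-like).
    """
    overhead = (data_blocks + parity_blocks) / data_blocks
    return vm_size_gb * overhead

# Mirroring with FTT=1 keeps two full copies: 2x overhead.
print(raw_capacity_needed(50, 1, 1))  # 100.0 GB, the 2x of today's GA release
print(raw_capacity_needed(50, 3, 1))  # ~66.7 GB, i.e. 1.33x with 3+1
print(raw_capacity_needed(50, 4, 2))  # 75.0 GB, i.e. 1.5x with 4+2
```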

Dedupe is the second feature you can test in the upcoming beta. I don’t think I really need to explain what it is; I think it is great that we may have this functionality as part of VSAN at some point in the future. Deduplication will be applied on a “per disk group” basis. Of course the results of deduplication will vary, but with the various workloads we have tested we have seen up to 8x improvements in usable capacity. Again, this will depend highly on your use case and may end up being lower or higher.

And before I forget, there is another nice feature in the beta: end-to-end checksums (not just on the device). This will protect you not only against driver and firmware bugs anywhere on the stack, but also against bit rot on the storage devices. And yes, there will be a scrubber constantly running in the background. The goal is to protect against bit rot, network problems, and software and firmware issues. The checksum will use CRC32C, which utilizes special Intel CPU instructions for the best performance. These software checksums will complement the hardware-based checksums available today. This will be enabled by default and of course will be compatible with current and future data services.
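The basic write-time/read-time checksum idea is easy to illustrate. The sketch below uses Python's plain CRC32 from zlib as a stand-in for CRC32C, purely as an illustration of the mechanism:

```python
import zlib

def store_with_checksum(payload):
    """Attach a checksum at write time, computed end-to-end in software
    before the data is handed to any device."""
    return payload, zlib.crc32(payload)

def verify(payload, checksum):
    """At read time, recompute and compare; a mismatch means the block was
    corrupted somewhere between write and read (bit rot, firmware bug, ...)."""
    return zlib.crc32(payload) == checksum

block, crc = store_with_checksum(b"virtual machine data")
assert verify(block, crc)                       # clean read

flipped = bytes([block[0] ^ 0x01]) + block[1:]  # simulate a single bit flip
assert not verify(flipped, crc)                 # a scrubber would flag this
```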

If you are interested in being considered for the beta (and apologies in advance that we will not be able to accommodate all requests), you can submit your information via the sign-up form.



by Duncan Epping at August 31, 2015 10:51 PM

What is new for Virtual SAN 6.1?


It is VMworld, and of course there are many announcements being made, one of which is Virtual SAN 6.1, which will come as part of vSphere 6.0 Update 1. Many new features have been added, but a couple stand out if you ask me. In this post I am going to talk about what are, in my opinion, the key new features. Let's list them first and then discuss some of them individually.

  • Support for stretched clustering
  • Support for 2 node ROBO configurations
  • Enhanced Replication
  • Support for SMP-FT
  • New hardware options
    • Intel NVMe
    • Diablo Ultra Dimm
  • Usability enhancements
    • Disk Group Bulk Claiming
    • Disk Claiming per Tier
    • On-Disk Format Upgrade from UI
  • Health Check Plug-in shipped with vCenter Server
  • Virtual SAN Management Pack for VR Ops

When explaining the Virtual SAN architecture and concepts, there is always one question that comes up: what about stretched clustering? I guess the key reason is the way Virtual SAN distributes objects across multiple hosts for availability reasons, and people can easily see how that would work across datacenters. With Virtual SAN 6.1, stretched clustering is now fully supported. But what does that mean? What does it look like?

As you can see in the diagram above, it starts with 3 failure domains, two of which will be “hosting data” and one of which will be a “witness site”. All of this is based on the Failure Domains technology that was introduced with 6.0, and those who have used it know how easy it is. Of course there are requirements when it comes to deploying in a stretched fashion, and the key requirements for Virtual SAN are:

  • 5ms RTT latency max between data sites
  • 100ms RTT latency at most from data sites to witness site

Worth noting from a networking point of view is that from the data sites to the witness site there is no requirement for multicast routing, and the link can be L3. On top of that, the witness can be a nested ESXi instance, so there is no need to dedicate a full physical host just for witness purposes. Of course the data sites can also connect to each other over L3 if that is desired, but personally I suspect that VSAN over L2 will be a more common deployment, and it is also what I would recommend. Note that between the data sites there is still a requirement for multicast.

When it comes to deploying virtual machines on a stretched cluster, not much has changed. Deploy a VM, and VSAN will ensure that there is one copy of your data in Fault Domain A and one copy in Fault Domain B, with your witness in Fault Domain C. Makes sense, right? If one of the data sites fails, the other can take over. If the VM is impacted by a site failure, HA can take action. It is not rocket science and it is dead simple to set up. I will have a follow-up post with some more specifics in a couple of weeks.

Besides stretched clustering, Virtual SAN 6.1 also brings a 2-node ROBO option, based on the same technique as the stretched clustering feature. It basically allows you to have 2 nodes in your ROBO location and a witness in a central location. The maximum latency (RTT) in this scenario is 500ms, which should accommodate almost every ROBO deployment out there. Considering the low number of VMs typically involved in these scenarios, you are usually also fine with 1GbE networking in the ROBO location, which further reduces the cost.
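Putting the documented latency maximums for the stretched and ROBO topologies side by side, a toy validation helper could look like this (the names and structure are my own, purely illustrative, not a VMware tool):

```python
# Maximum round-trip times quoted for VSAN 6.1 topologies (milliseconds).
MAX_RTT_MS = {
    "data_to_data": 5,       # between the two data sites of a stretched cluster
    "data_to_witness": 100,  # data sites to the witness site
    "robo_to_witness": 500,  # 2-node ROBO site to the central witness
}

def link_ok(link_type, measured_rtt_ms):
    """Check a measured RTT against the documented maximum for that link."""
    return measured_rtt_ms <= MAX_RTT_MS[link_type]

assert link_ok("data_to_data", 3.2)      # fine for a stretched cluster
assert not link_ok("data_to_data", 8.0)  # too slow between data sites
assert link_ok("robo_to_witness", 350)   # acceptable for ROBO
```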

When it comes to disaster recovery, work has also been done to reduce the recovery point objective (RPO) for vSphere Replication. By default this is 15 minutes, but for Virtual SAN it has now been certified for 5 minutes. Just imagine combining this with a stretched cluster; that would be a great disaster avoidance and disaster recovery solution. Synchronous replication between active sites, and then asynchronous to wherever it needs to go.

But that is not it in terms of availability: support for SMP-FT has also been added. I honestly never expected this, but I have had many customers asking for it in the past 12 months. Other common requests I have seen are support for super-fast flash devices like Intel NVMe and Diablo Ultra DIMM, and 6.1 delivers exactly that.

Another big focus in this release has been usability and operations. Many enhancements have been made to make life easier. I like the fact that the Health Check plugin is now included with vCenter Server, and you can do things like upgrading the on-disk format straight from the UI. And of course there is the vR Ops Management Pack, which will enrich your vR Ops installation with all the details you could ever need about Virtual SAN. Very, very useful!

All of this makes Virtual SAN 6.1 definitely a release to check out!


by Duncan Epping at August 31, 2015 06:08 PM

VMworld 2015: vSphere APIs for IO Filtering update


I suspect that the majority of blogs this week will all be about Virtual SAN, Cloud Native Apps and EVO. If you ask me, the vSphere APIs for IO Filtering (VAIO) announcements are just as important. I’ve written about VAIO before, in a way; it was first released in vSphere 6.0 and opened to a select group of partners. For those who don’t know what it is, let’s recap: the vSphere APIs for IO Filtering are a framework which enables VMware partners to develop data services for vSphere in a fully supported fashion. VMware worked closely with EMC and SanDisk during the design and development phase to ensure that VAIO would deliver what partners would require of it.

These data services can be applied at a VM or VMDK level of granularity, and can be literally anything, by simply attaching a policy to your VM or VMDK. In this first official release, however, you will see two key use cases for VAIO:

  1. Caching
  2. Replication

The great thing about VAIO, if you ask me, is that it is an ESXi user-space-level API, which over time will make it possible for the various data services providers (like Atlantis, Infinio, etc.) who now have “virtual appliance” based solutions to move into ESXi and simplify their customers’ environments by removing that additional layer. (To be technically accurate, the VAIO APIs are all user-level APIs and the filters all run in user space; only a part of the VAIO framework runs inside the kernel itself.) On top of that, as it is implemented at the “right” layer, it will be supported for VMFS (FC/iSCSI/FCoE, etc.), NFS, VVols and VSAN based infrastructures. The diagram below shows where it sits.

VAIO software services are implemented before the IO is directed to any physical device and do not interfere with normal disk IO. In order to use VAIO you will need vSphere 6.0 Update 1. On top of that, of course, you will need to procure a solution from one of the VMware partners who are certified for it; VMware provides the framework, partners provide the data services!

As far as I know, the first two to market will be EMC and SanDisk. Other partners who are working on VAIO-based solutions, and from whom you can expect to see releases, are Actifio, PrimaryIO, Samsung, HGST and more. I am hoping to catch up with one or two of them this week or over the course of the next week so I can discuss it a bit more in detail.


by Duncan Epping at August 31, 2015 02:49 PM

Rands in Repose

4am Panic

It’s a definitive characteristic of the people I work with that they sign up for too much. They’re optimists. They believe they can do anything. They’re eternally growing. That’s the poetry; here’s the reality.

There are two paths for these eager optimists. The first path is the individual who is capable of both signing up for everything and also completing everything. These unicorns exist and I am fascinated by them because I am so completely on the second path. It’s on this path where I sign up for too much, which I invariably learn three weeks later when my eyes pop open for my 4am Panic.

The 4am Panic is achieved when the work I need to complete exceeds my mental capacity to consider it. Something annoyingly biologically chemical is triggered at 4am where apparently I must uselessly consider all of the work on my plate for no productive reason at all. Just stare at the ceiling and fret until I fall back to sleep.

You might not have the 4am panic, but you know the state because you’ve probably been there. It’s the state of constant reaction. It’s when you start blocking time off on your calendar just to keep up. You reinvent your productivity system, you write list after list after list, and you sleep poorly.

It’s worth taking some time to think about how you got here, but that’s not the point of this piece. I have simple advice and, well, it involves two more lists.

The First List

We’re going to write two lists and my request is that you don’t read about the second list until you’ve completed the first. I suspect if you understand the full exercise right now that your first list will be skewed and biased, so when I say “go”, stop reading, grab your favorite pen, and write the first list.

This is a list of the impossible things on your plate right now. Now, they aren’t actually impossible, but they’re big rocks. They’re sitting in your inbox or in your favorite productivity tool, and each time you see them, your brain freezes and thinks, “That’s big – skip it for now.”

Now, you can skip it a few times, but at some point you start to take some type of small credibility hit because forward progress isn’t being made. When you take that hit and multiply it by the fact that there are four other impossible tasks on your list, you’ve got a date with the ceiling at 4am.

Make a list of the impossible tasks. The big rocks. Everything that is weighing on you. Don’t worry, this is just for you.


The Second List

I’m really curious about the size of your impossible list. Three? Twenty? Any cathartic moments as you wrote? Revelations? It’s not the point of this piece, but one of the reasons to make a simple list is to get it out of your unstructured and emotional head and onto structured and readable paper.

Ok, turn that piece of paper over. We’re going to make a second list and, as much as possible, I want you to forget about the first list. List the people who merit your belief in their reliability, truth, ability, or strength. We’re talking about work here so I’m assuming these are co-workers, but don’t limit yourself to your immediate team or leads. Who is everyone in the company that you fully trust?

Next, for each person on your list, I want you to write why you trust this person. “Bruce. I trust Bruce to consider a problem from every angle. Hannah. I trust Hannah to always provide realistic dates. Marty. I trust Marty to always put his team before himself.”


How to Sleep Well

This second list is the reason you’re reading this piece. Earlier this year, I woke up at 4am for some ceiling time. The impossible was swirling and as I settled into my fretting, I realized I was deeply tired of the unstructured worry, so I rolled out of bed and I wrote the second list. 22 people that I completely trusted in one way or another. I was struck both by some of the names on the list and how I trusted them. If you’ve blown through both exercises and are just reading this piece, you can still have a rewarding moment of understanding that you are surrounded by a diverse set of trustworthy people.

I stared at the list for a few moments and realized I had a stunning amount of unrealized capacity around me. It was this moment which was behind the quote I recently gave to TechCrunch:

“My job is to get myself out of a job. I’m aggressively pushing things I think I could be really good at and should actually maybe own to someone else who’s gonna get a B at it, but they’re gonna get the opportunity to go do that … [I’m always asking] does this legit need to be on my list. Should I be doing this, or is this something I can give to someone else and they should be actually going and doing it? That’s one of my principles, to get myself out-of-the-way. Ideally there’s some morning where I get up and have my coffee and there’s absolutely nothing to do, everything else has been delegated.”

This is your task. Take your first list and see who on the second list can help out. There’s a reason you signed up for all these impossible tasks and big rocks. You were coming from an enthusiastic and optimistic place, but if you’re a leader of humans, the right answer might be to ask for help. The right answer might be to give the task to someone else who might not do as good a job, but who will learn more than you.

You might think that this is a long way to say “Delegation Matters!”, but there are other lessons. Your brain protects you in strange ways. Enthusiasm might not be strategic. You’re underestimating the people you trust.

I didn’t write anything down or match names to my first list until the next day. All I did at 4am was consider the list of the people I trusted and what I trusted them to do. After a few minutes, I went back to bed and slept all night.

by rands at August 31, 2015 02:37 PM

Google Blog

Android Wear now works with iPhones

Editor's note: As of September 2, you can check out new watches from Huawei, ASUS, and Motorola that all work with iPhones.

When you wear something every day, you want to be sure it really works for you. That’s why Android Wear offers countless design choices, so you can find the watch that fits your style. Want a round watch with a more classic look? Feel like a new watch band? How about changing things up every day with watch faces from artists and designers? With Android Wear you can do all of that. And now, Android Wear watches work with iPhones.

Android Wear for iOS is rolling out today. Just pair your iPhone (iPhone 5, 5c, 5s, 6, or 6 Plus running iOS 8.2+) with an Android Wear watch to bring simple and helpful information right to your wrist:

  • Get your info at a glance: Check important info like phone calls, messages, and notifications from your favorite apps. Android Wear features always-on displays, so you’ll never have to move your wrist to wake up your watch.
  • Follow your fitness: Set fitness goals, and get daily and weekly views of your progress. Your watch automatically tracks walking and running, and even measures your heart rate.
  • Save time with smart help: Receive timely tips like when to leave for appointments, current traffic info, and flight status. Just say “Ok Google” to ask questions like “Is it going to rain in London tomorrow?” or create to-dos with “Remind me to pack an umbrella.”

Today, Android Wear for iOS works with the LG Watch Urbane. All future Android Wear watches, including those from Huawei (pictured above), ASUS, and Motorola, will also support iOS, so stay tuned for more.

Dr. Seuss once said: “Today you are You, that is truer than true. There is no one alive who is Youer than You.” We agree. So whoever You are, and whatever You like—Android Wear lets you wear what you want.

by Google Blogs at August 31, 2015 09:41 AM


How to Create a Windows 10 Flash Drive Installer on a Mac

This quick guide will show you how to create a Windows 10 “bootable” USB thumb/flash drive from within OS X.

  1. First up, you’ll need a Windows 10 .iso file. This is the exact type of file that Microsoft provides when you download Windows 10 from them. If you obtained your copy of Windows 10 elsewhere, that’s fine, as long as the file is in the .iso format.
  2. The other item you’ll need is a USB flash/thumb drive. Depending on the version of Windows you’re installing, you may be able to use a flash drive as small as 4GB; if in doubt, an 8GB drive will suffice for every version of Windows 10. Please note: this process completely erases all of the data on that thumb drive, so if you have files on it, make sure to back them up first.
  3. To create a bootable (installation) Windows 10 USB flash drive we’re actually going to use Boot Camp Assistant, even though we are not going to install Windows 10 on your Mac (at least in this tutorial). Launch it by selecting Applications, then Utilities, and double-clicking Boot Camp Assistant.
  4. Click Continue to move on from the ‘Introduction’ screen.
  5. Make sure that Create a Windows 7 or later version install disk is selected, and Install Windows 7 or later version is NOT selected. Then click Continue.
  6. Now it’s time to select your Windows 10 ISO file. Click the Choose… button.
  7. Navigate to the Windows 10 .iso file, select it by clicking on it once, and then click the Open button.
  8. In the Destination Disk: section, select your USB thumb drive. Make sure to select the correct one – especially if you have other USB drives plugged into your Mac. Remember, any and all files on this USB flash drive will be erased, so double-check. Then click Continue.
  9. A scary This drive will be erased confirmation message will appear. Since you so diligently made sure to select the correct drive in the previous step, you can click Continue with no fear :)
  10. Now the boring part. This can take a while; you may want to go grab yourself a coffee or tea.
  11. Towards the end of the process, you’ll be prompted to enter your Mac’s password. Do so, then click the Add Helper button.
  12. Click Quit when the final screen appears.
  13. The USB flash drive will now be named WININSTALL (and is safe to eject) – you’re done!
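
One extra step worth taking before you begin: verify the integrity of the .iso from Terminal, so a corrupt download doesn’t surface halfway through the copy. A minimal sketch, assuming your file is named `Win10.iso` and you have the publisher’s SHA-256 checksum (both the filename and the hash below are placeholders, substitute your own):

```shell
# Compare the SHA-256 of the downloaded ISO against the published checksum.
# "Win10.iso" and EXPECTED are placeholders -- substitute your own values.
EXPECTED="0000000000000000000000000000000000000000000000000000000000000000"
ACTUAL=$(shasum -a 256 "Win10.iso" | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH" >&2
fi
```

If the two hashes don’t match, re-download the ISO before wasting time on the (slow) USB copy step.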

by Ross McKillop at August 31, 2015 05:22 AM

Google Webmasters

An update on CSV download scripts

With the new Search Analytics API, it's now time to gradually say goodbye to the old CSV download scripts for information on queries & rankings. We'll be turning off access to these downloads on October 20, 2015.

These download scripts have helped various sites & tools get information on queries, impressions, clicks, and rankings over the years. However, they don't use the new Search Analytics data, and they rely on the deprecated ClientLogin API.

Farewell, CSV downloads, you've served us (and many webmasters!) well, but it's time to move on. We're already seeing lots of usage with the new API. Are you already doing something neat with the API? Let us know in the comments!
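
If you're migrating a script off the CSV downloads, the replacement call is small. A sketch in Python, assuming the google-api-python-client library and an already-authorized Webmasters API `service` object (the site URL, dates, and the helper function name are illustrative placeholders):

```python
# Build the request body for a Search Analytics query -- the replacement
# for the per-query CSV download (clicks, impressions, CTR, position).
def build_query(start_date, end_date, row_limit=1000):
    return {
        "startDate": start_date,      # YYYY-MM-DD
        "endDate": end_date,
        "dimensions": ["query"],      # group result rows by search query
        "rowLimit": row_limit,
    }

# With an authorized service object you would then run something like:
# response = service.searchanalytics().query(
#     siteUrl="https://www.example.com/",
#     body=build_query("2015-08-01", "2015-08-28"),
# ).execute()
# for row in response.get("rows", []):
#     print(row["keys"][0], row["clicks"], row["impressions"])
```

Unlike the old scripts, the API returns structured JSON rows, so the CSV-parsing half of most old tooling simply disappears.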

by Google Webmaster Central at August 31, 2015 05:02 AM

August 30, 2015

Yellow Bricks

Project #vGiveback


It is VMworld again, and just like last year VMware decided to give back to the community. No, I am not talking about the virtualization community; I am talking about charity. The great thing is that, just like last year, you can get VMware to donate an amount to a charitable cause of your choice (health, children, education, or environment). As a friend of the VMware Foundation I would like to ask ALL of you to help. So what do you need to do, and what is the goal?

Well, it is simple. Publish a picture on either Instagram or Twitter. Make sure the picture represents the cause, and of course you need to tag it: use #vGiveBack and copy in @vmwFoundation. Your message should look something like this:

(Causes can be: #health #children #education or #environment)

The goal is 10,000 photos between Monday, August 31st and Thursday, September 3rd, from which a mosaic will be created. The final image for the mosaic symbolizes our community impact and, once complete, unlocks a donation that will be divided in proportion to the causes you select.

So what am I asking?

  1. Post a picture on Twitter or Instagram with the right hashtags and copy in @vmwFoundation (it doesn’t need to be in front of one of those signs, by the way; it can be a different pic…)
  2. Ask all of your friends to do the same. We need to hit that number, so let’s make this go viral!

Every little bit will help. Each tweet or Instagram post will bring us one step closer to unlocking our collective impact. If you don’t have Twitter or Instagram, ask your wife/son/daughter/friend to post for you!

"Project #vGiveback" originally appeared on Yellow Bricks. Follow me on Twitter - @DuncanYB.

by Duncan Epping at August 30, 2015 05:33 PM

Geeking with Greg

Working at Google

I joined Google a few months ago. I've wanted to work at Google for a long time. I first interviewed there back in 2003!

I've written on this blog since 2004, during Findory and beyond, but, like many blogs, posts have slowed in recent years. Unfortunately, I don't expect to be able to post much here in the coming months either.

Thanks for reading all these years. I hope you enjoyed this blog, and I hope to be able to post frequently again at some point in the future.

by Greg Linden at August 30, 2015 02:35 PM


Remote Management of ZFS servers with Puppet and RAD

Manuel Zach blogs about how to use Puppet with the new REST interface to Solaris RAD, introduced in Solaris 11.3. Solaris RAD is really interesting if you want to manage your servers via a programmatic interface.

by milek at August 30, 2015 11:27 AM

August 29, 2015

Trouble with tribbles

Tribblix meets MATE

One of the things I've been working on in Tribblix is to ensure that there's a good choice of desktop options. This varies from traditional window managers (all the way back to the old awm), to modern lightweight desktop environments.

The primary desktop environment (because it's the one I use myself most of the time) is Xfce, but I've had Enlightenment available as well. Recently, I've added MATE as an additional option.

OK, here's the obligatory screenshot:

While it's not quite as retro as some of the other desktop options, MATE has a similar philosophy to Tribblix - maintaining a traditional environment in a modern context. As a continuation of GNOME 2, I find it to have a familiar look and feel, but I also find it to be much less demanding both at build and run time. In addition, it's quite happy with older hardware or with VNC.

Building MATE on Tribblix was very simple. The dependencies it has are fairly straightforward - there aren't that many, and most of them you would tend to have anyway as part of a modern system.

To give a few hints, I needed to add dconf, a modern intltool, itstool, iso-codes, libcanberra, zenity, and libxklavier. Having downloaded the source tarballs, I built packages in this order (this isn't necessarily the strict dependency order, it was simply the most convenient):
  • mate-desktop
  • mate-icon-theme
  • eom (the image viewer)
  • caja (the file manager)
  • atril (the document viewer, disable libsecret)
  • engrampa (the archive manager)
  • pluma (the text editor)
  • mate-menus
  • mateweather (pretty massive)
  • mate-panel
  • mate-session
  • marco (the window manager)
  • mate-backgrounds
  • mate-themes (from 1.8)
  • libmatekbd
  • mate-settings-daemon
  • mate-control-center
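The sequence above is easy to script. A sketch in shell, assuming each source tarball has been unpacked into a directory named after the package; the `--prefix` and the `DRYRUN` guard are illustrative, not Tribblix's actual build harness:

```shell
# Build the MATE components in the order listed above.
# With DRYRUN set, just print what would be built; unset it to really build.
DRYRUN=1
PKGS="mate-desktop mate-icon-theme eom caja atril engrampa pluma \
mate-menus mateweather mate-panel mate-session marco mate-backgrounds \
mate-themes libmatekbd mate-settings-daemon mate-control-center"
for pkg in $PKGS; do
  if [ -n "$DRYRUN" ]; then
    echo "would build $pkg"
  else
    # Each directory is assumed to be an unpacked autotools tarball,
    # e.g. mate-desktop-1.10.x/
    (cd "$pkg"-* && ./configure --prefix=/usr && make && make install) || exit 1
  fi
done
```

Stopping on the first failure (`|| exit 1`) matters here, since later components link against the earlier ones.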
The code is pretty clean; I needed a couple of fixes, but overall very little needed to be changed for illumos.

The other thing I added was the murrine gtk2 theme engine. I had been getting odd warnings mentioning murrine from applications for a while, and MATE was competent enough to give me a meaningful warning about it.

I've been pretty impressed with MATE, it's a worthy addition to the available desktop environments, with a good philosophy and a clean implementation.

by Peter Tribble at August 29, 2015 11:27 AM

Aaron Johnson

Links: 8-28-2015

by ajohnson at August 29, 2015 06:30 AM

August 28, 2015

Ubuntu Geek

ATOM – A hackable text editor

Atom is a text editor that's modern, approachable, yet hackable to the core—a tool you can customize to do anything but also use productively without ever touching a config file.
Read the rest of ATOM – A hackable text editor (140 words)

© ruchi for Ubuntu Geek, 2015.

by ruchi at August 28, 2015 11:02 PM

Evaggelos Balaskas


This is a list of podcasts I listen to on a regular basis.

Tag(s): podcast

August 28, 2015 08:29 PM

Administered by Joe. Content copyright by their respective authors.