Planet Sysadmin               

          blogs for sysadmins, chosen by sysadmins...

November 25, 2015

Everything Sysadmin

Why I don't care that Dell installs Rogue Certificates On Laptops

In recent weeks Dell has been found to have installed rogue certificates on laptops they sell. Not once, but twice. The security ramifications of this are grim. Such a laptop can have its SSL-encrypted connections sniffed quite easily. Dell has responded by providing uninstall instructions and an application that will remove the cert. They've apologized and that's fine... everyone makes mistakes, don't let it happen again. You can read about the initial problem in "Dell Accused of Installing 'Superfish-Like' Rogue Certificates On Laptops" and the recurrence in "Second Root Cert-Private Key Pair Found On Dell Computer".

And here is why I don't care.

November 25, 2015 06:00 PM

We forget how big "big" is

Talk with any data scientist and they'll rant about how they hate the phrase "big data". Odds are they'll mention a story like the following:

My employer came to me and said we want to do some 'big data' work, so we're hiring a consultant to build a Hadoop cluster. So I asked, "How much data do you have?" and he replied, "Oh, we never really measured. But it's big. Really big! BIIIIG!!"

Of course I did some back-of-the-envelope calculations and replied, "You don't need Hadoop. We can fit that in RAM if you buy a big enough Dell." He didn't believe me. So I went to Dell's site and showed him a server that could support twice that amount, for less than the cost of the consultant.
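
The back-of-the-envelope check in that story is worth spelling out. Here is a minimal sketch in Python; the dataset size and server spec are made-up numbers for illustration, not figures from the anecdote:

    # Does "big data" actually fit in RAM on a single large server?
    dataset_tb = 3.0        # assumed size of the "really big" data
    server_ram_tb = 6.0     # a maxed-out commodity box from a big vendor

    if dataset_tb <= server_ram_tb:
        headroom = server_ram_tb - dataset_tb
        print("Fits in RAM with %.1f TB to spare - no Hadoop needed" % headroom)
    else:
        print("Genuinely big - a distributed approach may be justified")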

We also don't seem to appreciate just how fast computers have gotten.

November 25, 2015 03:00 PM

Geeking with Greg

Quick links

What has caught my attention lately:
  • Tog (of the famous Tog on Interface) says Apple has lost its way on design: "Apple is destroying design. Worse, it is revitalizing the old belief that design is only about making things look pretty. No, not so! Design is a way of thinking, of determining people’s true, underlying needs, and then delivering products and services that help them." ([1] [2])

  • Good advice on adding features to a product: "'Great or Dead', as in, if we can't make a feature great, it should be killed off." ([1])

  • Great data on smartphone and tablet ownership. Sometimes it's hard to remember that only five years ago most people didn't have smartphones. ([1])

  • Advice for anyone thinking of doing a startup. Here's the conclusion: "So all you need is a great idea, a great team, a great product, and great execution. So easy! ;)" ([1])

  • Related, a Dilbert comic on the value of a startup idea ([1])

  • "People might think that human-level AI is close because they think AI is more magical than it actually is" ([1])

  • "VCs hate technical risk. They’re comfortable with market risk, but technical risk is really difficult for them to reconcile." ([1])

  • Google finds eliminating bad advertisements increases long-term revenue, concluding: "A focus on user satisfaction could help to reduce the ad load on the internet at large with long-term neutral, or even positive, business impact." ([1] [2])

  • "Crappy ad experiences are behind the uptick in ad-blocking tools" ([1])

  • On filter bubbles, a new study finds algorithms yield more diversity of content than people choosing news themselves ([1] [2] [3])

  • Facebook data center fun: "The inclusion of 480 4 TB drives drove the weight to over 1,100 kg, effectively crushing the rubber wheels." ([1])

  • Great data on who uses which social networks ([1])

  • "One of the great mysteries of the tech industry in recent years has been the seeming disinterest of Google, which is now called Alphabet, in competing with Amazon Web Services for corporate customers." ([1])

  • "Maybe part of AWS value prop is the outsourcing of outages: when half the net is offline, any individual down site doesn't look as bad." ([1])

  • "87% of Android devices are vulnerable to attack by malicious apps ... because manufacturers have not provided regular security updates" ([1])

  • Fun maps showing where tourists take photos compared to locals ([1] [2] [3])

  • Multiple camera lenses, an idea soon coming to mobile phones too? ([1])

  • Another interesting camera technology: "17 different wavelengths ... software analyzes the images and finds ones that are most different from what the naked eye sees, essentially zeroing in on ones that the user is likely to find most revealing" ([1])

  • And another: "Take a short image sequence while slightly moving the camera ... to recover the desired background scene as if the visual obstructions were not there" ([1])

  • Useful to know: "Survey results are mostly unaffected when the non-Web respondents are left out." ([1])

  • Surprising finding, meal worms can thrive just eating styrofoam: "the larvae lived as well as those fed with a normal diet (bran) over a period of 1 month" ([1])

  • Autonomous drone for better-than-GoPro filming? ([1] [2])

  • "We see people turning onto, and then driving on, the wrong side of the road a lot ... Drivers do very silly things when they realize they’re about to miss their turn ... Routinely see people weaving in and out of their lanes; we’ve spotted people reading books, and even one [driver] playing a trumpet." ([1])

  • A fun and cool collection of messed up images out of Apple maps. It's almost art. ([1])

  • SMBC comic, also applies to AI ([1])

by Greg Linden at November 25, 2015 07:38 AM


{git, hg} Custom Log Output

The standard log output for both Git and Mercurial is a bit verbose for my liking. I keep my terminal at ~50 lines, which results in only getting about 8 to 10 log entries depending on how verbose the commit was. This isn't a big deal if you are just i...

by Scott Hebert at November 25, 2015 06:00 AM

November 24, 2015

Trouble with tribbles

Replacing SunSSH with OpenSSH in Tribblix

I recently did some work to replace the old SSH implementation used by Tribblix, which was the old SunSSH from illumos, with OpenSSH.

This was always on the list - our SunSSH implementation was decrepit and unmaintained, and there seemed little point in maintaining our own version.

The need to replace it has become more urgent recently, as the mainstream SSH implementations have drifted to the point that we're no longer compatible: with default settings, our implementation will not interoperate at all with the one shipped by modern Linux distributions.

As I've been doing a bit of work with some of those modern Linux distributions, being unable to connect to them was a bit of a pain in the neck.

Other illumos distributions such as OmniOS and SmartOS have also recently been making the switch.

Then there was a proposal to work on the SunSSH implementation so that it was mediated - allowing you to install both SunSSH and OpenSSH and dynamically switch between them to ease the transition. Personally, I couldn't see the point - it seemed to me much easier to simply nuke SunSSH, especially as some distros had already made or were in the process of making the transition. But I digress.

If you look at OmniOS, SmartOS, or OpenIndiana, they each carry a number of patches - in some cases a lot of patches - to bring OpenSSH more in line with the old SunSSH.

I studied these at some length and largely rejected them. There are a couple of reasons for this:

  • In Tribblix, I have a philosophy of making minimal modifications to upstream projects. I might apply patches to make software build, or when replacing older components so that I don't break binary compatibility, but in general what I ship is as close to what you would get if you did './configure --prefix=/usr; make ; make install' as I can make it.
  • Some of the fixes were for functionality that I don't use, probably won't use, and have no way of testing. So blindly applying patches and hoping that what I produce still works, and doesn't arbitrarily break something else, isn't appealing. Unfortunately all the gssapi stuff falls into this bracket.

One thing that might change this in the future, and something we've discussed a little, is to have something like Joyent's illumos-extra brought up to a state where it can be used as a common baseline across all illumos distributions. It's a bit too specific to SmartOS right now, so won't work for me out of the box, and it's a little unfortunate that I've just about reimplemented all the same things for Tribblix myself.

So what I ship is almost vanilla OpenSSH. The modifications I have made are fairly few:

It's split into the same packages (3 of them) along just about the same boundaries as before. This is so that you don't accidentally mix bits of SunSSH with the new OpenSSH build.

The server has
KexAlgorithms +diffie-hellman-group1-sha1
added to /etc/ssh/sshd_config to allow connections from older SunSSH clients.

The client has
PubkeyAcceptedKeyTypes +ssh-dss
added to /etc/ssh/ssh_config so that it will allow you to send DSA keys, for users who still have just DSA keys.

Now, I'm not 100% happy that I might have broken something SunSSH used to support, but having a working SSH that will interoperate with all the machines I need to talk to outweighs any minor disadvantages.

by Peter Tribble at November 24, 2015 09:32 PM

Le blog de Carl Chenet

db2twitter: Twitter out of the browser

You have a database and a tweet pattern, and want to tweet automatically on a regular basis? No need for RSS, fancy tricks, or a third-party website to translate RSS to Twitter. Just use db2twitter. db2twitter is on Github (star it if you like it :) ), and the official documentation is on readthedocs.

by Carl Chenet at November 24, 2015 06:00 PM

Aaron Johnson

November 23, 2015

Everything Sysadmin

What JJ Abrams just revealed about Star Wars

Last night (Saturday, Nov 21) I attended a fundraiser for the Montclair Film Festival where (I kid you not) for 90 minutes we watched Stephen Colbert interview J.J. Abrams.

What I learned:

  • He finished mixing The Force Awakens earlier that day. 2:30am California time. He then spent all day traveling to Newark, New Jersey for the event.
  • After working on it for so long, he's sooooo ready to get it in the theater. "The truth is working on this movie for nearly three years, it has been like living with the greatest roommate in history for too long. It's time for him to get his own place. It's been great and I can't tell you how much I want him to get out into the world and meet other people because we know each other really well. But really, 'Star Wars' is bigger than all of us. So I'm thrilled beyond words (to be involved) and terrified more than I can say."
  • When they played the Force Awakens trailer, J.J. said he had seen it before, but this was the first time he saw it with a live audience.
  • J.J. was influenced at an early age by "The Force" as being a non-denominational power for good.
  • Stephen Colbert saw the original Star Wars 3 weeks early thanks to a contest. He gloated that he's been excited about Star Wars for 3 weeks longer than anyone here.
  • Jennifer Garner worked for Colbert as a nanny when she was starting out in acting and needed the money.
  • Stephen Colbert auditioned for J.J.'s first film but didn't get the part. The script was called Filofax but was called "Taking Care of Business" when it was released. Colbert said he remembered auditioning for Filofax and then seeing TCoB in the theater and thinking, "Gosh this seems a lot like Filofax!"
  • J.J. acted in a film. He had a cameo in "Six Degrees of Separation". They showed a clip from YouTube.
  • While the pilot for "Lost" was being filmed, the network changed presidents. The new one wasn't very confident in the new series and asked them to film an ending for the pilot that would permit it to be shown as a made-for-TV movie instead. J.J. pretended to "forget" the request and the president never brought it up again.

The fundraiser was a total win: 2,800 people were there (J.J. said "about 2,700 more than I expected"). If you are in the NY/NJ area, I highly recommend you follow them on Facebook and check out the next film festival, April 29 to May 8, 2016.

November 23, 2015 01:15 AM

Ubuntu Geek

Step By Step Ubuntu 15.10 (Wily Werewolf) LAMP Server Setup


In around 15 minutes, the time it takes to install Ubuntu Server Edition, you can have a LAMP (Linux, Apache, MySQL and PHP) server up and ready to go. This feature, exclusive to Ubuntu Server Edition, is available at the time of installation. The LAMP option means you don't have to install and integrate each of the four separate LAMP components, a process which can take hours and requires someone who is skilled in the installation and configuration of the individual applications. Instead, you get increased security, reduced time-to-install, and reduced risk of misconfiguration, all of which results in a lower cost of ownership. Currently the installer provides PostgreSQL database, Mail Server, OpenSSH Server, Samba File Server, Print Server, Tomcat Java Server, Virtual Machine Host, Manual Package selection, LAMP and DNS options for pre-configured installations, easing the deployment of common server configurations.
Read the rest of Step By Step Ubuntu 15.10 (Wily Werewolf) LAMP Server Setup (652 words)

© ruchi for Ubuntu Geek, 2015. | Permalink | No comment


by ruchi at November 23, 2015 12:31 AM

November 22, 2015

Trouble with tribbles

On Keeping Your Stuff to Yourself

One of the fundamental principles of OmniOS - and indeed probably its defining characteristic - is KYSTY, or Keep Your Stuff* To Yourself.

(*um, whatever.)

This isn't anything new. I've expressed similar opinions in the past. To reiterate - any software that is critical for the success of your business/project/infrastructure/whatever should be directly under your control, rather than being completely at the whim of some external entity (in this case, your OS supplier).

We can flesh this out a bit. The software on a system will fall, generally, into 3 categories:

  1. The operating system, the stuff required for the system to boot and run reliably
  2. Your application, and its dependencies
  3. General utilities

As an aside, there are more modern takes on the above problem: with Docker, you bundle the operating system with your application; with unikernels you just link whatever you need from classes 1 and 2 into your application. Problem solved - or swept under the carpet, rather.

Looking at the above, OmniOS will only ship software in class 1, leaving the rest to the end user. SmartOS is a bit of a hybrid - it likes to hide everything in class 1 from you and relies on pkgsrc to supply classes 2 and 3, and the bits of class 1 that you might need.

Most (of the major) Linux distributions ship classes 1, 2, and 3, often in some crazily interdependent mess that you have to spend ages unpicking. The problem being that you need to work extra hard to ensure your own build doesn't accidentally acquire a dependency on some system component (or that your build somehow reads a system configuration file).

Generally missing from discussions is class 3 - the general utilities. Stuff you could really do with having around to make your life easier, but whose specifics you don't really care about.

For example, it helps to have a copy of the GNU userland around. Way too much source out there needs GNU tar to unpack, or GNU make to build, or assumes various things about the userland that are only true of the GNU tools. (Sometimes the GNU tools aren't just a randomly incompatible implementation; occasionally they have capabilities that are missing from the standard tools - like in-place editing in gsed.)

Or a reasonably complete suite of compression utilities. More accurately, uncompression, so that you have a pretty good chance of being able to unpack some arbitrary format that people have decided to use.

Then there are generic runtimes. There's an awful lot of python or perl out there, and sometimes the most convenient way to get a job done is to put together a small script or even a one-liner. So while you don't really care about the precise details, having copies of the appropriate runtimes (and you might add java, erlang, ruby, node, or whatever to that list) really helps for the occasions when you just want to put together a quick throwaway component. Again, if your business-critical application stack requires that runtime, you maintain your own, with whatever modules you need.

There might also be a need for basic graphics. You might not want or need a desktop, but something is linked against X11 anyway. (For example, java was mistakenly linked against X11 for font handling, even in headless mode - a bug recently fixed.) Even if it's not X11, applications might use common code such as cairo or pango for drawing. Or they might need to read or write image formats for web display.

So the chances are that you might pull in a very large code surface, just for convenience. Certainly I've spent a lot of time building 3rd-party libraries and applications on OmniOS that were included as standard pretty much everywhere else.

In Tribblix, I've attempted to build and package software cognizant of the above limitations. So I supply as wide a range of software in class 3 as I can - this is driven by my own needs and interests, as a rule, but over time it's increasingly complete. I do supply application stacks, but these are built to live in a separate location, and are kept at arm's length from the rest of the system. This is then integrated with Zones in a standardized zone architecture in a way that can be managed by zap. My intention here is not necessarily to supply building blocks that can be used by users, but to provide the whole application, fully configured and ready to go.

by Peter Tribble at November 22, 2015 02:20 PM

Server Density

November 20, 2015


Implementing 'git show' in Mercurial

One of my frequently used git commands is 'git show <rev>'. As in, "show me what the heck this guy did here." Unfortunately, Mercurial doesn't have the same command, but it's easy enough to implement it using an alias in your .hgrc. The command y...

by Scott Hebert at November 20, 2015 02:00 PM

Yellow Bricks

Virtual SAN: Generic storage platform of the future


Over the last couple of weeks I have been presenting at various events on the topic of Virtual SAN. One of the sections in my deck is about the future of Virtual SAN and where it is heading. Someone recently tweeted one of the diagrams in my slides, which got picked up by Christian Mohn, who provided his thoughts on the diagram and what it may mean for the future. I figured I would share my story behind this slide, which is actually a new version of a slide originally presented by Christos and also discussed in one of his blog posts. First, let's start with the diagram:

If you ask people today what VSAN is, most will answer: a "virtual machine" storage system. But VSAN to me is much more than that. VSAN is a generic object storage platform which today is primarily used to store virtual machines. But these objects can be anything if you ask me, and on top of that can be presented as anything.

So what is VMware working towards, what is our vision? VSAN was designed from the start to serve as a generic object storage platform, and is being extended to serve different types of data by providing an abstraction layer. In the diagram you see "REST" and "FILE" and things like Mesos and Docker; it isn't difficult to imagine what types of workloads we envision running on top of VSAN and what types of access you have to resources managed by VSAN. This could be through a native REST API that is part of the platform, which developers can use directly to store their objects, or through a specific driver for direct "block" access, for instance.

Combine that with the prototype of the distributed filesystem which was demonstrated at VMworld and I think it is fair to say that the possibilities are endless. VSAN isn't just a storage system for virtual machines; it is a generic object-based storage platform which leverages local resources and will be able to share those in a clustered fashion in any shape or form in the future. Christian definitely had a point; in which shape or form all of this will be delivered remains to be seen, though - this is not something I can (or want to) speculate on. Whether that is through Photon Platform or something else is in my opinion beside the point. Even today VSAN has no dependencies on vCenter Server and can be fully configured, managed and monitored using the APIs and/or the different command-line interface options we offer. Agility and choice have always been the key design principles for the platform.

Where things will go exactly and when this will happen is still to be seen. But if you ask me, exciting times are ahead for sure, and I can’t wait to see how everything plays out.


"Virtual SAN: Generic storage platform of the future" originally appeared on Follow me on twitter - @DuncanYB.

by Duncan Epping at November 20, 2015 01:49 PM

Aaron Johnson

Links: 11-19-2015

by ajohnson at November 20, 2015 06:30 AM

November 19, 2015


How to Delete or “Forget” a WiFi Network in Windows 10

This brief guide will take you step by step through the process of removing (also known as “forgetting”) a Wireless Network in Windows 10.

Two of the most common reasons for deleting the connection settings of a wireless network that you’ve previously connected to are:

  1. To troubleshoot a connection that no longer works.
  2. To stop your device from automatically connecting to networks you don't use regularly (particularly helpful for laptop and tablet users).

Here are the quick steps to take in order to remove a wireless network from Windows 10:

  1. Click the “Start” button and select Settings
  2. Select Network & Internet
  3. Select Wi-Fi from the column on the left side of the window. From the Wi-Fi panel now displayed on the right side of the window, scroll down and click Manage Wi-Fi settings
  4. Scroll down to the Manage known networks section. From the list of all of your “known” (saved) Wi-Fi networks, select the one you want to delete by clicking on it once. Once it’s been selected, a Forget button will appear. Click that button to remove the associated Wireless Network.
  5. Select each of the other networks you want to delete (if any) and repeat the “click Forget” process. That’s it!

by Ross McKillop at November 19, 2015 08:37 PM

Server Density

Server Alerts: a Step by Step Look Behind the Scenes

Alert processing is a key part of the server monitoring journey.

From triggering webhooks to sending emails and SMS, this is where all the magic happens. In this article we will explore what takes place in those crucial few milliseconds, from agent data landing on our servers to the final alert appearing on your device.

But before we dive into the inner workings of alerting, we'd be remiss if we didn't touch on the underlying technology that makes it all possible.

We’re all about Python

At its core, Server Density is a Python company.

Having an experienced Python team is only a part of it. For every performance problem we’ve faced in our infrastructure over the years, we’ve always managed to find a Python way to solve it.

There is so much to like about Python.

We love its syntax and the way it forces us to write readable code. By following the PEP-8 spec we ensure the code is easy to read and maintain. We also appreciate Python's industry-leading unit testing capabilities, as they offer invaluable gains to our coding efforts. And while we don't expect 100% test coverage, we strive to be as close as we can. Python offers some simple and scalable tooling towards that.

Another key feature is simplicity. From prototyping testing scripts to proof of concept APIs, Python provides numerous small wins and speeds up our workflow. Testing new ideas and trying new approaches is much quicker with Python, compared to other languages.

Last but not least, Python comes with “batteries included”. The vast amount of available modules (they do everything you can imagine), make Python a truly compelling platform for us.

Our server alerts stack

As you can see, our stack is not 100% Python. That said, all our backend development is Python based. Here is a comprehensive list of the technologies we use:

Now, let’s take a behind-the-scenes look at the alerts processing workflow.

1. Entering the Server Density Network

The agent only ever sends data over HTTPS which means no special protocols or firewall rules are used. It also means the data is encrypted in transit.

It all starts when the JSON payload (a bundle of key metrics the user has chosen to monitor) enters the Cloudflare network. It is then proxied to Server Density and travels via accelerated transit to our Softlayer POP. Using an anycast routed global IP, the payload then hits our Nginx load balancers. Those load balancers are the only point of entry to the entire Server Density network.

2. Asynchronous goodness

Once routed by the load balancers, the payload enters a Tornado cluster (4 bare-metal servers, each running one Tornado instance per core on its 8 cores) for processing. As part of this cluster, we use the kafka-python library to implement the producer (a minimal sketch follows the list below). This Tornado app is responsible for:

  • Data validation.
  • Statistics collection.
  • Basic data transformation.
  • Queuing payloads to kafka to prepare them for step 3 below.
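
A minimal sketch of what such a handler might look like, using Tornado and kafka-python's KafkaProducer (the topic name, URL path and response body are assumptions, not Server Density's actual code):

    import json

    import tornado.web
    from kafka import KafkaProducer

    producer = KafkaProducer(bootstrap_servers="localhost:9092")

    class PayloadHandler(tornado.web.RequestHandler):
        def post(self):
            try:
                payload = json.loads(self.request.body)  # data validation
            except ValueError:
                raise tornado.web.HTTPError(400)
            # queue the payload to kafka, ready for the Storm topology (step 3)
            producer.send("payloads", json.dumps(payload).encode("utf-8"))
            self.write({"status": "queued"})

    app = tornado.web.Application([(r"/payload", PayloadHandler)])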

3. Payload processing

Our payload processing starts with a cluster of servers running Apache Storm. This cluster is running one single topology (a graph of spouts and bolts that are connected with stream groupings), which is where all the key stuff happens.

While Apache Storm is a Java-based solution, all our code is Python. To do this, we use the multi-lang feature offered by Apache Storm. This allows us to use some special Java-based spouts and bolts which execute Python scripts containing all our code. Those are long-running processes which communicate over stdout and stdin, following the multi-lang protocol defined by Apache Storm.
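
For illustration, the Python side of such a bolt can be very small, using the storm.py helper module that Apache Storm ships for Python components (the bolt name and single-field tuple layout here are assumptions):

    import storm

    class TransformBolt(storm.BasicBolt):
        def process(self, tup):
            payload = tup.values[0]        # the payload emitted by the spout
            payload["transformed"] = True  # some basic data transformation
            storm.emit([payload])          # hand it to the next bolt downstream

    TransformBolt().run()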

The cluster communication is done using Zookeeper (the coordination transport), so the output of one process may automatically end up at a process on another node.

At Server Density we have split up the processing effort into isolated steps, each implemented as an Apache Storm bolt. This way we are able to parallelise work as much as possible. It also lets us keep our current internal SLA of 150ms for a full payload processing cycle.

4. Kafka consumer

Here we use the standard KafkaSpout component from Apache Storm. It’s the only part of the topology that is not using a Python based implementation. What it does is connect to our Kafka cluster and inject the next payload into our Apache Storm topology, ready to be processed.

5. Enriching our payloads

The payload also needs some data from our database. This information is used to figure out some crucial things, like what alerts to trigger. Specialized bolts gather this information from our databases, attach it to the payload and emit it, so it can be used later in other bolts.

At this point we also verify that the payload is for an active account and an active device. If it’s a new device, we check the quota of the account to decide whether we need to discard it (because we cannot handle new devices on that account), or carry on processing (and increase the account’s quota usage).

We also verify that the provided agentKey is valid for the account it was intended for. If not, we discard the payload.
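
Put together, the checks in this step amount to something like the following sketch (all names and fields are hypothetical, for illustration only):

    def should_process(payload, account, device_is_new):
        if not account.active:
            return False  # inactive account: discard the payload
        if payload.get("agentKey") not in account.agent_keys:
            return False  # key not valid for this account: discard
        if device_is_new:
            if account.devices_used >= account.device_quota:
                return False               # over quota: cannot handle new devices
            account.devices_used += 1      # carry on, increasing quota usage
        return True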

6. Storing everything in metrics

Each payload needs to be split up into smaller pieces and normalized in order to be stored in our metrics cluster. We group the metrics and generate a JSON snapshot every minute. That snapshot is kept for five days. We also store metrics in an aggregated format once every hour. That's the permanent format we keep in our time series database.
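
As a rough sketch of the per-minute grouping (the field names and the use of averaging are assumptions; the real snapshot format is richer):

    from collections import defaultdict

    def minute_snapshots(samples):
        """samples: iterable of (unix_ts, metric_name, value) tuples."""
        buckets = defaultdict(list)
        for ts, metric, value in samples:
            buckets[(ts - ts % 60, metric)].append(value)  # truncate to the minute
        return {key: sum(vals) / len(vals) for key, vals in buckets.items()}

The same idea, with 3600 in place of 60 and whatever aggregate functions are wanted, yields the hourly format.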

7. Alert processing

In this step we match the values of the payload against the alert thresholds defined for any given device. If there is a wait time set, the alert starts a counter and waits for subsequent payloads to check whether it has expired.

When the counter expires (or if there was no wait value to begin with), we go ahead and emit all the necessary data to the notification bolt. That way, alerts can be dispatched to users based on the preferences for that particular alert.
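
The core of that logic can be sketched in a few lines (state handling and names are assumptions, not the actual implementation):

    import time

    def threshold_breached(value, threshold, wait_seconds, state, now=None):
        now = time.time() if now is None else now
        if value <= threshold:
            state.pop("breached_at", None)  # back below threshold: reset the counter
            return False
        since = state.setdefault("breached_at", now)  # start counting on first breach
        return now - since >= wait_seconds            # fire once the wait has expired

Each subsequent payload calls this with the same state dict, so the alert fires on the first check after the wait time has expired.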

8. Notifications

Once we’ve decided that a particular payload (or absence of it) has triggered an alert, one of our bolts will calculate which notifications need to be triggered. Then we’ll send one http request per notification to our notifications server, another tornado cluster (we will expand on the inner workings of this in a future post. Stay tuned).
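
A minimal sketch of that dispatch loop, using Tornado's asynchronous HTTP client (the notifications URL and payload shape are assumptions):

    import json

    from tornado.httpclient import AsyncHTTPClient

    async def dispatch(notifications, url="https://notify.example.com/send"):
        client = AsyncHTTPClient()
        for note in notifications:  # one HTTP request per notification
            await client.fetch(url, method="POST",
                               body=json.dumps(note),
                               headers={"Content-Type": "application/json"})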


Everything happens in an instant. Agents installed on 80,000 servers around the world send billions of metrics to our servers. We rely on Python (and other technologies) to keep the trains running, and so far we haven’t been disappointed.

We hope this post has provided some clarity on the various moving parts behind our alerting logic. We’d love to hear from you. How do you use Python in your mission critical apps?

The post Server Alerts: a Step by Step Look Behind the Scenes appeared first on Server Density Blog.

by David Mytton at November 19, 2015 02:33 PM

Google Webmasters

Updating Our Search Quality Rating Guidelines

Developing algorithmic changes to search involves a process of experimentation. Part of that experimentation is having evaluators—people who assess the quality of Google's search results—give us feedback on our experiments. Ratings from evaluators do not determine individual site rankings, but are used to help us understand our experiments. The evaluators base their ratings on guidelines we give them; the guidelines reflect what Google thinks search users want.

In 2013, we published our human rating guidelines to provide transparency on how Google works and to help webmasters understand what Google looks for in web pages. Since that time, a lot has changed: notably, more people have smartphones than ever before and more searches are done on mobile devices today than on computers.

We often make changes to the guidelines as our understanding of what users want evolves, but we haven't shared an update publicly since then. However, we recently completed a major revision of our rater guidelines to adapt to this mobile world, recognizing that people use search differently when they carry internet-connected devices with them all the time. You can find that update here (PDF).

This is not the final version of our rater guidelines. The guidelines will continue to evolve as search, and how people use it, changes. We won’t be updating the public document with every change, but we will try to publish big changes to the guidelines periodically.

We expect our phones and other devices to do a lot, and we want Google to continue giving users the answers they're looking for—fast!

by Google Webmaster Central at November 19, 2015 11:26 AM

Yellow Bricks

Online Technology Forum – Wednesday the 25th – Free!


This week I flew to Staines (that is where VMware's European HQ is these days) to record a video for the Online Technology Forum. For those who don't know what it is: it is similar to a vForum, but you don't even need to get out of your comfortable chair... you can attend from home or the office. My session was all about Software Defined Storage: where we are today and where we will be tomorrow. In it I will discuss things like VSAN, VVols and VAIO - primarily what was announced at VMworld, but also with a hint of what is coming for all three in the future.

Of course it isn't just my session. During the Online Technology Forum, which is held on Wednesday the 25th of November, there are many great sessions held by folks like Joe Baguley (keynote), Paul Nothard (vSphere), Mike Laverick (EVO:RAIL), Lee Dilworth (VVol/VSAN) and folks like Robbie Jerrom, who is going to tell you all about Cloud Native Apps and those container thingies. And that is not it: Automation, Orchestration, EUC, Networking and more are all part of the agenda.

On top of that, during the sessions you have the ability to ask your questions to a panel of experts who will answer them live online. After my session there is also a live Q&A panel where you can ask your questions directly to one of the experts on the panel (I am one of them, and I believe Joe Baguley is the moderator).

This event was a huge success 6 months ago, and I don’t expect anything less this time around. Make sure to sign up today and tune in next week.

"Online Technology Forum – Wednesday the 25th – Free!" originally appeared on Follow me on twitter - @DuncanYB.

by Duncan Epping at November 19, 2015 09:25 AM

Ubuntu Geek

LNAV – Ncurses based log file viewer

The Logfile Navigator, lnav for short, is a curses-based tool for viewing and analyzing log files. The value lnav adds over text viewers and editors is that it takes advantage of any semantic information that can be gleaned from the log file, such as timestamps and log levels. Using this extra semantic information, lnav can do things like interleave messages from different files, generate histograms of messages over time, and provide hotkeys for navigating through the file. It is hoped that these features will allow the user to quickly and efficiently zero in on problems.
Read the rest of LNAV – Ncurses based log file viewer (380 words)

© ruchi for Ubuntu Geek, 2015. | Permalink | No comment


by ruchi at November 19, 2015 12:52 AM

Administered by Joe. Content copyright by their respective authors.