Planet Sysadmin               

          blogs for sysadmins, chosen by sysadmins...

July 29, 2015

Google Blog

See the world in your language with Google Translate

The Google Translate app already lets you instantly visually translate printed text in seven languages. Just open the app, click on the camera, and point it at the text you need to translate—a street sign, ingredient list, instruction manual, dials on a washing machine. You'll see the text transform live on your screen into the other language. No Internet connection or cell phone data needed.

Today, we’re updating the Google Translate app again—expanding instant visual translation to 20 more languages (for a total of 27!), and making real-time voice translations a lot faster and smoother—so even more people can experience the world in their language.
Instantly translate printed text in 27 languages

We started out with seven languages—English, French, German, Italian, Portuguese, Russian and Spanish—and today we're adding 20 more. You can now translate to and from English and Bulgarian, Catalan, Croatian, Czech, Danish, Dutch, Filipino, Finnish, Hungarian, Indonesian, Lithuanian, Norwegian, Polish, Romanian, Slovak, Swedish, Turkish and Ukrainian. You can also do one-way translations from English to Hindi and Thai. (Or, try snapping a pic of the text you’d like translated—we have a total of 37 languages in camera mode.)

To try out the new languages, go to the Google Translate app, set “English” along with the language you’d like to translate, and click the camera button; you'll be prompted to download a small (~2 MB) language pack for each.

Ready to see all of these languages in action?


And how exactly did we get so many new languages running on a device with no data connection? It’s all about convolutional neural networks (whew)—geek out on that over on our Research blog.

Have a natural, smoother conversation—even with a slower mobile network
In many emerging markets, slow mobile networks can make it challenging to access many online tools - so if you live in an area with unreliable mobile networks, our other update today is for you. In addition to instant visual translation, we’ve also improved our voice conversation mode (enabling real-time translation of conversations across 32 languages), so it’s even faster and more natural on slow networks.

These updates are coming to both Android and iOS, rolling out over the next few days.

Translate Community helps us get better every day
On top of today’s updates, we’re also continuously working to improve the quality of the translations themselves and to add new languages. A year ago this week, we launched Translate Community, a place for multilingual people from anywhere in the world to provide and correct translations. Thanks to the millions of language lovers who have already pitched in—more than 100 million words so far!—we've been updating our translations for over 90 language pairs, and plan to update many more as our community grows.

We’ve still got lots of work to do: more than half of the content on the Internet is in English, but only around 20% of the world’s population speaks English. Today’s updates knock down a few more language barriers, helping you communicate better and get the information you need.


by Google Blogs (noreply@blogger.com) at July 29, 2015 07:12 AM

mypapit gnu/linux blog

Using a new Google API or Service? It might not last long…

I read an interesting analysis on the lifespan of some of the Google services and APIs that have been cancelled: http://www.theguardian.com/technology/2013/mar/22/google-keep-services-closed

Based on the analysis, at the time of writing Google services had a mean lifespan of 1,459 days.

An interesting point to ponder, especially if your applications/services depend on one or more Google services.

Screenshot below:
(screenshot: google-service)

by Mohammad Hafiz mypapit Ismail at July 29, 2015 05:19 AM

What ever happened to 1Malaysia Email – Tricubes, Pemandu, anyone??

What ever happened to 1Malaysia email, a project that was said to be valued at 5.3 million (MAVcap funding)?

Source:

by Mohammad Hafiz mypapit Ismail at July 29, 2015 05:07 AM

July 28, 2015

Ubuntu Geek

Vivaldi – A Chromium based web browser

Sponsored Link
Vivaldi is a freeware web browser developed by Vivaldi Technologies, a company founded by former Opera Software co-founder and CEO Jon Stephenson von Tetzchner. The browser is aimed at staunch technologists, heavy Internet users, and previous Opera web browser users disgruntled by Opera's transition from the Presto layout engine to the Blink layout engine, which removed many popular features in the process. Vivaldi aims to revive the old, popular features of Opera 12 and introduce new, more innovative ones.
(...)
Read the rest of Vivaldi – A Chromium based web browser (388 words)


© ruchi for Ubuntu Geek, 2015. | Permalink | No comment | Add to del.icio.us


by ruchi at July 28, 2015 11:01 PM

Yellow Bricks

Datrium finally out of stealth… Welcome Datrium DVX!



Before I get started, I have not been briefed by Datrium, so I am also still learning as I type this; it is purely based on the somewhat limited info on their website. Datrium’s name has been in the press a couple of times, as it was the company often associated with Diane Greene. The rumour back then was that Diane Greene was the founder and was going to take on EMC; that was just a rumour, as Diane Greene is actually an investor in Datrium. Not just her, of course: Datrium is also backed by NEA (a venture capital firm) and various other well-known people like Ed Bugnion, Mendel Rosenblum, Frank Slootman and Kai Li. Yes, a big buy-in from some of the original VMware founders. Knowing that two of the Datrium founders (Boris Weissman and Ganesh Venkitachalam) are former VMware Principal Engineers (and old-timers), that makes sense. (Source) This morning a tweet was sent out, and it seems today they are officially out of stealth.

So what is Datrium about? Well, Datrium delivers a new type of storage system which they call DVX. Datrium DVX is a hybrid solution comprised of host-local data services and a network-accessed capacity shelf called the “netshelf”. I think this quote from their website says it all about their intention: move all functionality to the host and let the “shelf” just take care of storing bits. I have also included a diagram that I found on their website, as it makes things clearer.

On the host, DiESL manages in-use data in massive deduplicated and compressed caches on BYO (bring your own) commodity SSDs locally, so reads don’t need a network hop. Hosts operate locally, not as a pool with other hosts.

(diagram: Datrium DVX)

It seems that from a host perspective the data services (caching, compression, raid, cloning etc) are implemented through the installation of a VIB. So not VM/Appliance based but rather kernel based. The NetShelf is accessible via 10GbE and Datrium uses a proprietary protocol to connect to it. From a host side (ESXi) they connect locally over NFS, which means they have implemented an NFS Server within the host. The NFS connection is also terminated within the host and they included their own protocol/driver on the host to be able to connect to the NetShelf. It is a bit of an awkward architecture, or better said … at first it is difficult to wrap your head around it. This is the reason I used the word “hybrid” but maybe I should have used unique. Hybrid, not because of the mixture of flash and HDD but rather because it is a hybrid of hyper-converged / host local caching and more traditional storage but done in a truly unique way. What does that look like? Something like this I guess:

(diagram: Datrium DVX)

So what does this look like from a storage perspective? Well, each NetShelf will come with 29TB of usable capacity. The expected deduplication and compression rate for enterprise companies is between 2-6x, which means you will have between 58TB and 175TB at your disposal. In order to ensure your data is highly available, the NetShelf is a dual-controller setup with dual-port drives (which means the drives are connected to both controllers and used in an “active/standby” fashion). Each controller has NVRAM which is used for write caching, and a write will be acknowledged to the VM when it has been written to the NVRAM of both controllers. In other words, if a controller fails there should be no data loss.

Talking about availability, what if a host fails? If I read their website correctly, there is no write caching from a host point of view, as it states that each host operates independently from a caching point of view (no mirroring of writes to other hosts). This also means that all the data services need to be inline (dedupe / compress / raid). When those actions complete, the result will be stored on the NetShelf and then it is accessible by other hosts when needed. It makes me wonder what happens when DRS is enabled and a VM is migrated from one host to another. Will the read cache migrate with it to the other host? And what about very write-intensive workloads, how will those perform when all data services are inline? What kind of overhead can/will it have on the host? How will it scale out? What if I need more than one NetShelf? Those are some of the questions that pop up immediately. Considering the brain-power within Datrium (former VMware, Data Domain, NetApp, EMC etc.), I am assuming they have a simple answer to those questions. I will try to ask them these questions at VMworld or during a briefing and write a follow-up.

From an operational aspect it is an interesting solution, as it should lower the effort involved with managing storage almost to zero. There is the NFS connection and you have your VMs and VMDKs at the front end; at the back end you have a black box, or better said a shelf, dedicated to storing bits. This should be dead easy to manage and deploy. It shouldn't require a dedicated storage administrator; the VMware admin should be able to manage it. Some of you may ask: well, what if I want to connect anything other than a VMware host to it? For now Datrium appears to be mainly targeting VMware environments (which makes sense considering their DNA), but I guess they could implement this for various platforms in a similar fashion.

Again, I was not briefed by Datrium and I accidentally saw their tweet this morning but their solution is so intriguing I figured I would share it anyway. Hope it was useful.

Interested? More info here:

"Datrium finally out of stealth… Welcome Datrium DVX!" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.

by Duncan Epping at July 28, 2015 01:20 PM

Simplehelp

How to Set Up a VPN on Your iPhone

This tutorial will guide you through the steps required to configure and use a VPN service on your iPhone (or iPad).

(screenshot: the VPN symbol for the iPhone)

Here are the things you’ll need before you get started:

  1. The VPN server address in addition to your username and password. Depending on how your VPN service is set up, you may also need to know your Secret key.
  2. The iPhone supports the L2TP/IPSec, PPTP and Cisco IPSec protocols without any additional Apps installed. Don’t worry if none of that makes any sense, just make sure your VPN server/service supports one of those 3 protocols. Simplehelp strongly recommends and endorses the VPN service provided by Private Internet Access, which works perfectly with the iPhone.

Let’s get started!

  1. Tap the Settings button on your iPhone (or iPad).
     (screenshot: the iPhone and iPad Settings button)

  2. Select General.
     (screenshot: the list of settings on an iPhone)

  3. Scroll down to the section titled VPN and tap it.
     (screenshot: the VPN section of the settings on an iPhone)

  4. Tap Add VPN Configuration…
     (screenshot: the add a new VPN link on an iPhone)

  5. For the sake of this tutorial we’re going to set up an L2TP connection, but the steps for PPTP and IPSec are similar enough that you should be able to follow along.
     (screenshot: the blank settings for a VPN on an iPhone)

  6. Start entering the required information – give your VPN connection a Description (anything will do, but something descriptive is best), then enter your Server Name and your Account (which is your user name).

     The password can be saved, but if you leave it blank you’ll be prompted to enter it each time you connect to your VPN. Based on your personal preference (or in the case of time-changing passwords) you can either enter it or leave it blank.
     (screenshot: a VPN properly configured without the password entered)

  7. If your VPN provider uses a Secret, enter it in the appropriate field. The Secret is not used by every service, so if yours doesn’t seem to have one listed (where you located your username, password and server address), it’s likely yours doesn’t use that feature.
     (screenshot: a configured VPN on the iPhone)

  8. Make sure that Send All Traffic is enabled (it is by default) and then tap Save.
     (screenshot: saving a VPN on the iPhone)

  9. Now your VPN connection will be listed in the VPN section. Toggle the ON/OFF button to ON (green).
     (screenshot: turning on a VPN on the iPhone)

  10. After you see the Connected confirmation, tap the “i” next to your newly created VPN configuration.
      (screenshot: the information icon for a VPN on the iPhone)

  11. It’s from here that you can always check to see how long you’ve been connected.
      (screenshot: the duration of time your iPhone has been connected to a VPN)

  12. Back in your Settings you’ll also see a new toggle for your VPN, so you can easily turn it on and off from there.
      (screenshot: the toggle VPN on and off button on the iPhone)

  13. The other visible update to your iPhone is that it will have a “VPN” symbol (see screenshot below) in the top menu bar when you’re connected to your VPN.
      (screenshot: the VPN symbol for the iPhone)

  14. You can test out your service by visiting a “what’s my IP” website (like this one) if you’re using a VPN to hide your identity/location. As illustrated in the screenshot below, it appears as though my iPhone was being used in Texas, when in fact I was in a different state.
      (screenshot: the iPhone browser connected to a VPN)

  15. That’s it! Congratulations on setting up your VPN.

by Ross McKillop at July 28, 2015 01:07 AM

Everything Sysadmin

LISA Conversations premieres on Tuesday!

Yes, I've started a video podcast that has a homework assignment built-in. Watch a famous talk from a past LISA conference (that's the homework) then watch Tom and Lee interview the speaker. What's new since the talk? Were their predictions validated? Come find out!

Watch it live or catch the recorded version later.

The first episode will be recorded live Tuesday July 28, 2015 at 1:30pm PDT. This month's guest will be Todd Underwood who will discuss his talk from LISA '13 titled, Post-Ops: A Non-Surgical tale of Software, Fragility, and Reliability

For links to the broadcast and other info, go here: https://www.usenix.org/conference/lisa15/lisa-conversations

July 28, 2015 12:28 AM

July 27, 2015

mypapit gnu/linux blog

OpenTracker – simplistic, lightweight PHP bittorrent tracker

I just remembered that I used to run a simplistic and lightweight PHP bittorrent tracker on my web server; the tracker slowly attracted users until it used up all of the allocated web server bandwidth.

This is a pure tracker with no statistics or control panel: just drop it in the web directory, create the required MySQL database connection and schema, and you’re good to go. You only need to include the tracker “announce” URL in the newly created bittorrent file and start seeding!

Download OpenTracker by WhitSoft Development
Download OpenTracker (Mirror) from Mypapit Personal Blog

by Mohammad Hafiz mypapit Ismail at July 27, 2015 05:34 PM

Google Webmasters

#NoHacked: How to avoid being the target of hackers

If you publish anything online, one of your top priorities should be security. Getting hacked can negatively affect your online reputation and result in loss of critical and private data. Over the past year Google has noticed a 180% increase in the number of sites getting hacked. While we are working hard to combat this hacking trend, there are steps you can take to protect your content on the web.
Today, we’ll be continuing our #NoHacked campaign. We’ll be focusing on how to protect your site from hacking and give you better insight into how some of these hacking campaigns work. You can follow along with #NoHacked on Twitter and Google+. We’ll also be wrapping up with a Google Hangout focused on security where you can ask our security experts questions.

We’re kicking off the campaign with some basic tips on how to keep your site safe on the web.

1. Strengthen your account security

Creating a password that’s difficult to guess or crack is essential to protecting your site. For example, your password might contain a mixture of letters, numbers, symbols, or be a passphrase. Password length is important. The longer your password, the harder it will be to guess. There are many resources on the web that can test how strong your password is. Testing a similar password to yours (never enter your actual password on other sites) can give you an idea of how strong your password is.
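
As a rough illustration (my addition, not from the original post), on most Linux or Mac systems you can generate a long random password straight from the shell; this assumes OpenSSL and coreutils are available, which they are on most distributions:

# 24 random bytes, base64-encoded (roughly 32 characters)
openssl rand -base64 24

# or: 32 random alphanumeric characters read from /dev/urandom
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32; echo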

Also, it’s important to avoid reusing passwords across services. Attackers often try known username and password combinations obtained from leaked password lists or hacked services to compromise as many accounts as possible.

You should also turn on 2-Factor Authentication for accounts that offer this service. This can greatly increase your account’s security and protect you from a variety of account attacks. We’ll be talking more about the benefits of 2-Factor Authentication in two weeks.

2. Keep your site’s software updated

One of the most common ways for a hacker to compromise your site is through insecure software on your site. Be sure to periodically check your site for any outdated software, especially updates that patch security holes. If you use a web server like Apache, nginx or commercial web server software, make sure you keep your web server software patched. If you use a Content Management System (CMS) or any plug-ins or add-ons on your site, make sure to keep these tools updated with new releases. Also, sign up to the security announcement lists for your web server software and your CMS if you use one. Consider completely removing any add-ons or software that you don't need on your website -- aside from creating possible risks, they also might slow down the performance of your site.
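
As a hedged example (the post doesn't prescribe specific commands), on a Debian or Ubuntu server the routine check-and-patch cycle might look like this:

# refresh package lists and apply pending updates, including the web server and any CMS packages installed via apt
sudo apt-get update
sudo apt-get upgrade

# optionally, enable automatic security updates
sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades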

3. Research how your hosting provider handles security issues

Your hosting provider’s policy for security and cleaning up hacked sites is an important factor to consider when choosing a hosting provider. If you use a hosting provider, contact them to see if they offer on-demand support to clean up site-specific problems. You can also check online reviews to see if they have a track record of helping users with compromised sites clean up their hacked content.
If you control your own server or use Virtual Private Server (VPS) services, make sure that you’re prepared to handle any security issues that might arise. Server administration is very complex, and one of the core tasks of a server administrator is making sure your web server and content management software is patched and up to date. If you don't have a compelling reason to do your own server administration, you might find it well worth your while to see if your hosting provider offers a managed services option.

4. Use Google tools to stay informed of potential hacked content on your site

It’s important to have tools that can help you proactively monitor your site. The sooner you can find out about a compromise, the sooner you can work on fixing your site.

We recommend you sign up for Search Console if you haven’t already. Search Console is Google’s way of communicating with you about issues on your site including if we have detected hacked content. You can also set up Google Alerts on your site to notify you if there are any suspicious results for your site. For example, if you run a site selling pet accessories called www.example.com, you can set up an alert for [site:example.com cheap software] to alert you if any hacked content about cheap software suddenly starts appearing on your site. You can set up multiple alerts for your site for different spammy terms. If you’re unsure what spammy terms to use, you can use Google to search for common spammy terms.

We hope these tips will keep your site safe on the web. Be sure to follow our social campaigns and share any tips or tricks you might have about staying safe on the web with the #NoHacked hashtag.

If you have any additional questions, you can post in the Webmaster Help Forums where a community of webmasters can help answer your questions. You can also join our Hangout on Air about Security on August 26.

by Google Webmaster Central (noreply@blogger.com) at July 27, 2015 12:49 PM

Google Blog

Everything in its right place

When we launched Google+, we set out to help people discover, share and connect across Google like they do in real life. While we got certain things right, we made a few choices that, in hindsight, we’ve needed to rethink. So over the next few months, we’re going to be making some important changes. Here’s more about what you can expect:

A more focused Google+ experience
Google+ is quickly becoming a place where people engage around their shared interests, with the content and people who inspire them. In line with that focus, we’re continuing to add new features like Google+ Collections, where you can share and enjoy posts organized by the topics you care about. At the same time, we’ll also move some features that aren’t essential to an interest-based social experience out of Google+. For example, many elements of Google+ Photos have been moved into the new Google Photos app, and we’re well underway putting location sharing into Hangouts and other apps, where it really belongs. We think changes like these will lead to a more focused, more useful, more engaging Google+.

Using Google without a Google+ profile
People have told us that accessing all of their Google stuff with one account makes life a whole lot easier. But we’ve also heard that it doesn’t make sense for your Google+ profile to be your identity in all the other Google products you use.

So in the coming months, a Google Account will be all you’ll need to share content, communicate with contacts, create a YouTube channel and more, all across Google. YouTube will be one of the first products to make this change, and you can learn more on their blog. As always, your underlying Google Account won’t be searchable or followable, unlike public Google+ profiles. And for people who already created Google+ profiles but don’t plan to use Google+ itself, we’ll offer better options for managing and removing those public profiles.

You’ll see these changes roll out in stages over several months. While they won’t happen overnight, they’re right for Google’s users—both the people who are on Google+ every single day, and the people who aren’t.

by Google Blogs (noreply@blogger.com) at July 27, 2015 10:00 AM

Rising to the climate challenge

In less than five months, policymakers from around the world will gather in Paris to finalize a new global agreement on combating climate change. Already, many governments are putting forth ambitious emissions reduction goals. And companies are taking action, too, by reducing their own footprints and investing in clean energy.

Reaching a strong deal in Paris is an absolute and urgent necessity. The data is clear and the science is beyond dispute: a warming planet poses enormous threats to society.

Public health experts recently warned that climate change threatens to “undermine the last half century of gains in development and global health,” through forces like extreme weather, drought, malnutrition, and disease. The U.S. government has asserted that climate change poses “immediate risks to U.S. national security,” as increased natural disasters and humanitarian crises fuel instability and violence. And many studies have revealed that critical infrastructure, like electricity and water, is vulnerable to rising sea levels and intensifying storms.

Climate change is one of the most significant global challenges of our time. Rising to that challenge involves a complex mix of policy, technology, and international cooperation. This won’t be easy, but Google is committed to doing its part.

Google has been carbon neutral since 2007. Our data centers, the physical infrastructure behind web services used by billions of people, now get 3.5 times the computing power out of the same amount of electricity, as compared to five years ago. We are also the biggest corporate purchaser of renewable power on the planet. Just today at the White House, we pledged to triple those purchases over the next decade. In addition, we're a major climate-minded investor, so far committing more than $2 billion to clean energy projects, from America’s largest wind farm to Africa’s largest solar power plant.

We're serious about environmental sustainability not because it’s trendy, but because it’s core to our values and also makes good business sense. After all, the cheapest energy is the energy you don’t use in the first place. And in many places clean power is cost-competitive with conventional power.

We’re making progress, but averting catastrophic climate change will require significant investment and bold innovations. Google and our private-sector peers are ready to lead. But something fundamental is required: clear policy. The global business community needs certainty to bring climate solutions to scale. We need the world’s political leaders to confirm that investments in clean energy are sound, and that the laws and policies meant to enable such investment will be designed for the long term and rooted in what science tells us needs to be done.

It’s encouraging to see the world’s major economies set ambitious climate targets, but it’s time to get a strong international climate agreement on the books. This December in Paris, it’s imperative that policymakers reach a deal that moves us toward a zero-carbon economy. That’s the kind of future that we’re committed to helping build, and that future generations deserve.

by Google Blogs (noreply@blogger.com) at July 27, 2015 09:23 AM

July 26, 2015

Ubuntu Geek

Compare PDF Files on Ubuntu

If you want to compare PDF files you can use one of the following utility

Comparepdf

comparepdf is a command line application used to compare two PDF files. The default comparison mode is text mode, where the text of each corresponding pair of pages is compared. As soon as a difference is detected the program terminates with a message (unless -v0 is set) and an indicative return code.
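
A minimal invocation (the file names here are placeholders) just passes the two PDFs and checks the exit status, since differences are reported via the return code as noted above:

comparepdf old.pdf new.pdf
echo $?   # 0 means no differences were detected; a non-zero code indicates a difference or an error (see the man page for the exact codes)
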
(...)
Read the rest of Compare PDF Files on Ubuntu (227 words)


© ruchi for Ubuntu Geek, 2015. | Permalink | No comment | Add to del.icio.us
Post tags: , ,

Related posts

by ruchi at July 26, 2015 11:01 PM

mypapit gnu/linux blog

How to install NGINX with PageSpeed module in Ubuntu LTS / Debian

PageSpeed modules are open source modules developed by Google that automatically perform website optimizations for faster site delivery.

The PageSpeed module is not included in the NGINX packages shipped with Ubuntu or Debian, so you need to recompile NGINX together with the PageSpeed module to enable its functionality.

There are several steps to this method. First, you need to get the latest stable (or mainline) nginx from the PPA (optional):

#this step is optional, only if you want to get the latest Ubuntu version of nginx

sudo apt-get -y install software-properties-common

sudo -s

nginx=stable # use nginx=development for latest development version
add-apt-repository ppa:nginx/$nginx

apt-get update 

apt-get -y upgrade

Then you have to install dpkg-dev, the unzip utility, and the nginx source from the apt repository:

apt-get -y install dpkg-dev unzip

apt-get install nginx

apt-get source nginx

After that, you need to download the PageSpeed module. This instruction is adapted from:

https://developers.google.com/speed/pagespeed/module/build_ngx_pagespeed_from_source

**Note: replace ${NGINX_VERSION} with the version of NGINX available from apt-get; in my case it's "1.8.0".

cd
export NPS_VERSION=1.9.32.4
export NGINX_VERSION=1.8.0

wget -c https://github.com/pagespeed/ngx_pagespeed/archive/release-${NPS_VERSION}-beta.zip

unzip release-${NPS_VERSION}-beta.zip

cd ngx_pagespeed-release-${NPS_VERSION}-beta/

wget -c https://dl.google.com/dl/page-speed/psol/${NPS_VERSION}.tar.gz

tar -xzvf ${NPS_VERSION}.tar.gz

cd ..   # back to the directory that contains the nginx source extracted by apt-get source

Install all build dependencies (your configuration may vary, but I keep it within the default Ubuntu configuration):

apt-get -y install libpcre3-dev libssl-dev libxslt1-dev libgd-dev libgeoip-dev geoip-bin geoip-database libpam0g-dev zlib1g-dev memcached

Then configure nginx. Remember to replace ${NGINX_VERSION} with your current version of NGINX; in my case, it's "1.8.0".

cd nginx-${NGINX_VERSION}

./configure  --with-cc-opt='-g -O2 -fPIE -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_spdy_module --with-http_sub_module --with-http_xslt_module --with-mail --with-mail_ssl_module --add-module=debian/modules/nginx-auth-pam --add-module=debian/modules/nginx-dav-ext-module --add-module=debian/modules/nginx-echo --add-module=debian/modules/nginx-upstream-fair --add-module=debian/modules/ngx_http_substitutions_filter_module --sbin-path=/usr/local/sbin --add-module=$HOME/ngx_pagespeed-release-${NPS_VERSION}-beta

After that, run make and make install

make

make install

The newly compiled nginx will be installed in "/usr/local/sbin" without overwriting the original binary file.
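
As a quick sanity check (my addition, not part of the original guide), you can confirm that the freshly built binary actually includes the PageSpeed module before switching over to it; nginx prints its configure arguments to stderr, so redirect them for grep (assuming the binary ends up at /usr/local/sbin/nginx as the rest of this guide does):

/usr/local/sbin/nginx -V 2>&1 | grep -o 'ngx_pagespeed[^ ]*'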

Optionally, you may duplicate the nginx init script in /etc/init.d, rename it to nginx-pagespeed, and stop the original nginx server:

cp /etc/init.d/nginx /etc/init.d/nginx-pagespeed

sed -i 's|/usr/sbin/nginx|/usr/local/sbin/nginx|g' /etc/init.d/nginx-pagespeed

service nginx stop

You may also enable basic PageSpeed config in /etc/nginx/conf.d/

nano /etc/nginx/conf.d/pagespeed.conf

And add these basic PageSpeed config

#file /etc/nginx/conf.d/pagespeed.conf
        pagespeed on;
        pagespeed FetchWithGzip on;

        pagespeed FileCachePath /run/shm/pagespeed_cache;
        pagespeed RewriteLevel CoreFilters;

Save the file and test the nginx config; after that, start the nginx-pagespeed service:

/usr/local/sbin/nginx -t

service nginx-pagespeed start

**Note: This instruction has been tested under Ubuntu 14.04 LTS with nginx 1.8.0 from the ppa:nginx/stable repository. The LTS release was chosen because it has much longer support for servers, and nginx 1.8.0 supports both SPDY 3.1 and the latest PageSpeed.

***Please share any thoughts, opinions or suggested corrections if this guide didn't work for you. Thanks!!

by Mohammad Hafiz mypapit Ismail at July 26, 2015 05:59 PM

How to test if PageSpeed module is running (on NGINX)

You can run a simple test using curl to verify whether the PageSpeed module is running or not on NGINX.

curl -I -X GET {ip address | web address}
curl -I -X GET 192.168.1.47

The output would come out something like this…
(screenshot: xpagespeed-test)

You will see the "X-Page-Speed" header with its version (in my case it's "1.9.32.4-7251").
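
If you just want a quick pass/fail check from the shell, you can filter for the header directly (the IP below is the same example address used above; substitute your own host):

curl -sI -X GET 192.168.1.47 | grep -i x-page-speed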

If it DOESN’T work

There are two possibilities:

It doesn’t work! First possibility…
Your NGINX may not have been compiled with PageSpeed support. To check, run:

nginx -V

You should see a list of compiled nginx modules; if PageSpeed support is compiled in, ngx_pagespeed-release-{version} will be listed.

Sample output:
(screenshot: nginx-ensure)

If it is not listed, then you need to recompile nginx with the PageSpeed module (see the previous guide).

It doesn’t work! Second possibility…
You did not configure the PageSpeed module. To configure PageSpeed, just create the "/etc/nginx/conf.d/pagespeed.conf" file and fill it with the basic PageSpeed config.

#file /etc/nginx/conf.d/pagespeed.conf
        pagespeed on;
        pagespeed FetchWithGzip on;

        pagespeed FileCachePath /run/shm/pagespeed_cache;
        pagespeed RewriteLevel CoreFilters;

Save the file and restart the nginx HTTP server.

by Mohammad Hafiz mypapit Ismail at July 26, 2015 05:53 PM

Server Density

Raymii.org

Running TSS/8 on the DEC PiDP-8/i and SIMH

In this guide I'll show you how to run the TSS/8 operating system on the PiDP replica by Oscar Vermeulen, and on SIMH on any other computer. I'll also cover a few basic commands like the editor, user management and system information. TSS-8 was a little time-sharing operating system released in 1968; it requires a minimum of 12K words of memory and a swapping device, and on a 24K word machine it supports up to 17 users. Each user gets a virtual 4K PDP-8; many of the utilities users ran on these virtual machines were only slightly modified versions of utilities from the Disk Monitor System or paper-tape environments. Internally, TSS-8 consists of RMON, the resident monitor, DMON, the disk monitor (file system), and KMON, the keyboard monitor (command shell). BASIC was well supported, while restricted (4K) versions of FORTRAN D and Algol were available.

July 26, 2015 12:00 AM

July 24, 2015

Trouble with tribbles

boot2docker on Tribblix

Containers are the new hype, and Docker is the Poster Child. OK, I've been running containerized workloads on Solaris with zones for over a decade, so some of the ideas behind all this are good; I'm not so sure about the implementation.

The fact that there's a lot of buzz is unmistakeable, though. So being familiar with the technology can't be a bad idea.

I'm running Tribblix, so running Docker natively is just a little tricky. (Although if you actually wanted to do that, then Triton from Joyent is how to do it.)

But there's boot2docker, which allows you to run Docker on a machine - by spinning up a copy of VirtualBox for you and getting that to actually do the work. The next thought is obvious - if you can make that work on MacOS X or Windows, why not on any other OS that also supports VirtualBox?

So, off we go. First port of call is to get VirtualBox installed on Tribblix. It's an SVR4 package, so it should be easy enough. Ah, but it has special-case handling for various Solaris releases that causes it to derail quite badly on illumos.

Turns out that Jim Klimov has a patchset to fix this. It doesn't handle Tribblix (yet), but you can take the same idea - and the same instructions - to fix it here. Unpack the SUNWvbox package from datastream to filesystem format, edit the file SUNWvbox/root/opt/VirtualBox/vboxconfig.sh, replacing the lines

             # S11 without 'pkg'?? Something's wrong... bail.
             errorprint "Solaris $HOST_OS_MAJORVERSION detected without executable $BIN_PKG !? I are confused."
             exit 1

with

         # S11 without 'pkg'?? Likely an illumos variant
         HOST_OS_MINORVERSION="152"

and follow Jim's instructions for updating the pkgmap, then just pkgadd from the filesystem image.

Next, the boot2docker cli. I'm assuming you have go installed already - on Tribblix, "zap install go" will do the trick. Then, in a convenient new directory,

env GOPATH=`pwd` go get github.com/boot2docker/boot2docker-cli

That won't quite work as is. There are a couple of patches. The first is to the file src/github.com/boot2docker/boot2docker-cli/virtualbox/hostonlynet.go. Look for the CreateHostonlyNet() function, and replace

    out, err := vbmOut("hostonlyif", "create")
    if err != nil {
        return nil, err
    }


with

    out, err := vbmOut("hostonlyif", "create")
    if err != nil {
               // default to vboxnet0
        return &HostonlyNet{Name: "vboxnet0"}, nil
    }


The point here is that, on a Solaris platform, you always get a hostonly network - that's what vboxnet0 is - so you don't need to create one, and in fact the create option doesn't even exist so it errors out.

The second little patch is that the arguments to SSH don't quite match the SunSSH that comes with illumos, so we need to remove one of the arguments. In the file src/github.com/boot2docker/boot2docker-cli/util.go, look for DefaultSSHArgs and delete the line containing IdentitiesOnly=yes (which is the option SunSSH doesn't recognize).

Then you need to rebuild the project.

env GOPATH=`pwd` go clean github.com/boot2docker/boot2docker-cli
env GOPATH=`pwd` go build github.com/boot2docker/boot2docker-cli

Then you should be able to play around. First, download the base VM image it'll run:

./boot2docker-cli download

Configure VirtualBox

./boot2docker-cli init

Start the VM

./boot2docker-cli up

Log into it

./boot2docker-cli ssh

Once in the VM you can run docker commands (I'm doing it this way at the moment, rather than running a docker client on the host). For example

docker run hello-world

Or,

docker run -d -P --name web nginx
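
To see which host port the container's port 80 was published to, you can ask Docker from inside the VM (standard Docker commands, nothing Tribblix-specific):

docker ps
docker port web 80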
 
Shut the VM down

./boot2docker-cli down

While this is interesting, and reasonably functional, certainly to the level of being useful for testing, a sign of the churn in the current container world is that the boot2docker cli is deprecated in favour of Docker Machine, but building that looks to be rather more involved.

by Peter Tribble (noreply@blogger.com) at July 24, 2015 10:27 PM

Google Blog

Through the Google lens: Search trends July 17-24

Anyone up for a look back at the last week on Google Search? We are! Read on to find out what the world was looking for this week.

Bad Blood
Phew, where to start with this one. Taylor Swift and Nicki Minaj had a spat over VMA nominations (Taylor was nominated for Video of the Year; Nicki was not), worked through it and made up -- all on Twitter. It was a good lesson in the art of the subtweet, as well as the “sincere apology after responding to a subtweet that wasn’t directly about you” tweet. Searches about the incident topped 500,000.

But Minaj v. Swift wasn’t the only music-related drama to make the list of Trending Searches. Meek Mill, hip hop artist and Nicki Minaj’s significant other, started a Twitter rant of his own, alleging that rapper Drake doesn’t write his own material and inspiring more than a million searches. The two artists haven’t settled up yet, so stay tuned for more on that front.

Last but not least, country stars Blake Shelton and Miranda Lambert are calling it quits after four years of marriage and guess what — they both had stuff to say about it on Twitter. More than 1 million searchers took to Google to find out more.

But wait, there’s more
It was the week of the sequel. (The weequel?) James Bond is back -- the trailer for the upcoming “Spectre” was released this week, which got more than 100,000 people searching for the movie. And nearly as surprising as the idea of a shark-filled tornado itself, the Sharknado is back. “Sharknado 3” -- featuring 90s all-stars Tara Reid and Ian Ziering -- aired on Wednesday night and pulled in a cool 500,000 searches.

Speaking of all-stars (and also of the 90s) remember the days when an NBA superstar could star in a wide-release film with his Looney Tunes pals? Well, it’s happening again. This time, it’s not Michael Jordan, but Lebron James who inked a deal with Warner Bros. The company announced the partnership on Wednesday, leading to 200,000 searches. Reports suggest that while Michael Jordan will be replaced, Bugs Bunny will play himself, though there has been speculation about a case of cartoon patellar tendonitis he’s been coping with quietly for years.

Posted by Megan Slack, who searched this week for [public pools in San Francisco].

by Google Blogs (noreply@blogger.com) at July 24, 2015 06:52 PM

Everything Sysadmin

Save on The Practice of Cloud System Administration

Pearson / InformIT.com is running a promotion through August 11th on many open-source related books, including Volume 2, The Practice of Cloud System Administration. Use discount code OPEN2015 during checkout and receive 35% off any one book, or 45% off 2 or more books.

See the website for details.

July 24, 2015 03:28 PM

Google Blog

Five ways we’re celebrating the Special Olympics and #ADA25

"Let me win. But if I cannot win, let me be brave in the attempt." -Special Olympics Athlete Oath

Standing in Soldier Field in Chicago, 47 years ago, Eunice Shriver kicked off the first Special Olympics in history--1,000 people with intellectual disabilities from the U.S. and Canada competed in track & field, swimming and diving. Even though it was a small inaugural event, its historical impact--giving a platform to the civil rights struggles of people with disabilities that were so often overlooked-- was massive. The Games were meant to give children with cognitive disabilities, in Eunice’s words, “the chance to play, the chance to compete and the chance to grow.”

Ambitious, inclusive thinking like Eunice’s is contagious, and has inspired us to support this year’s Special Olympics World Games as part of the Google Impact Challenge: Disabilities. Launched in May, this effort is focused on supporting the development of assistive technologies for people with disabilities around the world with $20 million in Google.org grants. This weekend, to mark the Games as well as the 25th anniversary of the Americans with Disabilities Act, landmark legislation that advanced the civil rights of people with disabilities when it was signed into law in 1990, we’re honoring the community in the following ways:

Google Doodle featuring a track and athletes inspired by the Special Olympics


Google Doodle. We’ve created a homepage Doodle that shows a track inspired by the Special Olympics World Games’ "circle of inclusion,” featuring athletes of all backgrounds. In the spirit of getting moving, since we've heard from users that they love seeing doodles on the go, we're now starting to make them easier to see and share on our mobile search results in addition to desktop and the Google app.

Special Olympics World Games Los Angeles 2015 logo

Special Olympics World Games. Over the next nine days, the Special Olympics World Games will draw more than half a million spectators to cheer on 7,000 athletes from 177 countries in events from judo to powerlifting to kayaking and more. We’re powering the World Games’ social media nerve center, contributing as a financial supporter and are packing more than 300 Googlers into the stands.

Supporters hold signs to cheer on athletes

Cheer an athlete. If you’re in Los Angeles, come visit us from July 25 until August 2 at the World Games Festival Space at USC’s Alumni Park to support the athletes. For those who can’t make it in person, you can visit g.co/WorldGames2015 to send a cheer to the athletes. Every day during the competition, we’ll decorate the dorm walls of the athletes with your cheers to encourage them to “be brave in the attempt.”

Portrait installation on the stairs at the National Portrait Gallery
Portraits, like these at the National Portrait Gallery featuring leaders Judy Heumann and Ed Roberts, who have campaigned tirelessly for the rights of people with disabilities and Tatyana McFadden, who inspires athletes today, will decorate Washington, D.C. this weekend. See the photo gallery
Painting the town. In Washington D.C. and Los Angeles, we’re marking the 25th anniversary of the Americans with Disabilities Act. From men and women like Judy Heumann and Ed Roberts, who campaigned tirelessly for the rights of people with disabilities, to President George H.W. Bush, who signed the ADA into law in 1990, we’re telling the stories of 10 great leaders who have fought -- and continue to fight -- for equal rights of people living with disabilities. We’ve installed massive portraits on the stairs of historic landmarks around the nation’s capital and in L.A.’s Grand Park.


Audio description available here
Telling stories. We’re featuring the little-known history of a number of unsung heroes of the ADA movement at g.co/ADA. While people with disabilities benefit from their hard-won battles with every curb cut street corner and closed-caption film, their names are not widely known. We’d like to change that.

by Google Blogs (noreply@blogger.com) at July 24, 2015 02:34 PM

Ferry Boender

OpenVAS: Creating credentials is very slow [FIXED]

When creating new credentials in OpenVAS (6, 7 and 8), it takes a very long time to store the credentials.

The problem here is that the credentials are stored encrypted, and OpenVAS (probably) has to generate a PGP key. This requires a lot of entropy, which is generally not abundantly available on a virtual machine. The solution is to install haveged:

sudo apt-get install haveged

Haveged will securely seed the random pool which will make a lot of random entropy available, even if you have no keyboard, mouse and soundcard attached. Ideal for VPSes.
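
To see the effect (this check is my addition, not from the original post), compare the kernel's available entropy before and after installing haveged; values below a few hundred indicate entropy starvation, while a healthy pool usually sits in the thousands:

cat /proc/sys/kernel/random/entropy_avail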

by admin at July 24, 2015 10:50 AM

Ubuntu Geek

VirtualBox 5.0 released and ubuntu installation instructions included

VirtualBox is a powerful x86 and AMD64/Intel64 virtualization product for enterprise as well as home use. Not only is VirtualBox an extremely feature rich, high performance product for enterprise customers, it is also the only professional solution that is freely available as Open Source Software under the terms of the GNU General Public License (GPL) version 2.
(...)
Read the rest of VirtualBox 5.0 released and ubuntu installation instructions included (1,090 words)


© ruchi for Ubuntu Geek, 2015. | Permalink | No comment | Add to del.icio.us


by ruchi at July 24, 2015 09:20 AM

Google Webmasters

Update on the Autocomplete API

Google Search provides an autocomplete service that attempts to predict a query before a user finishes typing. For years, a number of developers have integrated the results of autocomplete within their own services using a non-official, non-published API that also had no restrictions on it. Developers who discovered the autocomplete API were then able to incorporate autocomplete services, independent of Google Search.

There have been multiple times in which the developer community’s reverse-engineering of a Google service via an unpublished API has led to great things. The Google Maps API, for example, became a formally supported API months after seeing what creative engineers could do combining map data with other data sources. We currently support more than 80 APIs that developers can use to integrate Google services and data into their applications.

However, there are some times when using an unsupported, unpublished API also carries the risk that the API will stop being available. This is one of those situations.

We built autocomplete as a complement to Search, and never intended that it would exist disconnected from the purpose of anticipating user search queries. Over time we’ve realized that while we can conceive of uses for an autocomplete data feed outside of search results that may be valuable, overall the content of our automatic completions is optimized and intended to be used in conjunction with web search results, and outside of the context of a web search it doesn’t provide a meaningful user benefit.

In the interest of maintaining the integrity of autocomplete as part of Search, we will be restricting unauthorized access to the unpublished autocomplete API as of August 10th, 2015. We want to ensure that users experience autocomplete as it was designed to be used -- as a service closely tied to Search. We believe this provides the best user experience for both services.

For publishers and developers who still want to use the autocomplete service for their site, we have an alternative. Google Custom Search Engine allows sites to maintain autocomplete functionality in connection with Search functionality. Any partner already using Google CSE will be unaffected by this change. For others, if you want autocomplete functionality after August 10th, 2015, please see our CSE sign-up page.


by Google Webmaster Central (noreply@blogger.com) at July 24, 2015 08:30 AM

mypapit gnu/linux blog

Check public IP address with checkip.mypapit.net

You can now check your public IP Address by visiting checkip.mypapit.net.

The website will display your public IP Address (or your anonymous proxy IP Address). As usual, this service is provided without any warranty.

Additionally, you can also check your IP address from the bash command line using this curl/wget trick.

wget -qO - checkip.mypapit.net
curl -o - checkip.mypapit.net
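
This also makes it easy to capture the address in a shell script; a small sketch, assuming the service returns just the bare IP as the commands above suggest:

MYIP=$(curl -s checkip.mypapit.net)
echo "My public IP is ${MYIP}"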

by Mohammad Hafiz mypapit Ismail at July 24, 2015 06:31 AM

July 23, 2015

LinuxHaxor

The #1 Predictor of Success in Your Business

Over my 35 years in business, I have seen many organizations grow, scale and flourish, yet far more struggle, fail and fade into history. Over this period, I have developed a rule that I believe is the #1 indicator of a company's potential success. I call it my 60/30/10 Rule.

This rule describes the formula by which executives in an organization can think about ensuring their organization's success. To all of you business leaders out there: if you want to build a company that will succeed in the marketplace, may I suggest the 60/30/10 rule. It is simply this:

60% of the success or failure of your organization is PEOPLE dependent.

30% relates to your product, service or expertise.

10% is Dumb Luck*

This guideline, along with a decent Payroll service, is not based on a specific scientific study but rather on my long-term observations in the corporate world. When a company attracts great employees, the company is invariably successful. The reverse is also true: when companies lose their most talented team members, the company loses momentum. Stated another way, the direct impact of people on an organization's success is more important than the company's products or services.

My good friend, Amy Rees Anderson, recently wrote in a Forbes article:

"One of the most important lessons I learned during my years as a CEO was that great employees are not replaceable. It isn't the technology or the product that makes a company great, it's the people."

When I consider our strong growth and experience at Fusion-io (a company that went public on the NYSE in 2011), it seems that 30% of our success was due to the superb technology built by the founders, David Flynn and Rick White. However, until it was surrounded by people like Jim Dawson our VP of Sales, Steve Wozniak our Chief Scientist, Lance Smith our COO, Shawn Lindquist our General Counsel, Dennis Wolf our CFO, and even our remarkable Executive Assistant, Shannon Heward, together with spectacular Board members like Scott Sandell, Ray Bingham and Shane Robison, it would not have mattered how good the technology was. Until you have the people to shine the proper light on your innovations, you will simply remain an average company.

The reason I dove back into running a company after our success at Fusion-io was that HireVue was about bringing the right talent to any organization. Even though today's world is fixated on great technological innovations, it is my belief that the people component will continue to be at the heart and center of any extraordinary organization. The CEO of each company still ought to be the company's CTO: not Chief Technology Officer but rather Chief Talent Officer.

A company's most important assets go home every night. Great leaders will do all they can to ensure that their valued team members show up the next day as passionate and excited about their company as they were on the day they arrived at work.

If you look at the mega successes in today's world of technology, the best decisions they ever made have been "Who" decisions. When Apple brought back Steve Jobs to run the company in 1996, that was a "Who" decision. When Lou Gerstner came in and turned IBM around in the mid 90's, that was another "Who" decision. When we hired Steve Wozniak, the man who revolutionized a generation of computing when he invented the Apple personal computer, as our Chief Scientist at Fusion-io, that was our "Who" decision that turned out amazingly well.

Global leadership authority Claudio Fernandez-Araoz wrote a book last year, published by the Harvard Business Review Press, entitled: "It's Not the How or the What but the Who."

To that, I say Right On!

by fdorra.haxor at July 23, 2015 04:22 PM

Everything Sysadmin

SysAdmin Appreciation Day in New York City

If you are in NYC, there is a SysAdmin Appreciation Day event at The Gingerman, 11 E 36th St, New York City, NY, on Friday, July 31, 2015, 6:00 PM. This event usually has a big turn-out and is a great way to meet and network with local admins.

RSVP here: http://www.meetup.com/Sysdrink/events/223896825/

Thanks to Digital Ocean for sponsoring this event, and Justin, Jay, Nathan and the other organizers for putting this together every year.

Hope to see you there!

July 23, 2015 02:28 PM

Schyntax: A DSL for specifying recurring events

There are many ways to specify scheduled items. Cron has 10 8,20 * 8 1-5 and iCalendar has RRULE and Roaring Penguin brings us REMIND. There's a new cross-platform DSL called Schyntax, created by my Stack Overflow coworker Bret Copeland.

The goal of Schyntax is to be human readable, easy to pick up, and intuitive. For example, to specify every hour from 900 UTC until 1700 UTC, one writes hours(9..17)

What if you want to run every five minutes during the week, and every half hour on weekends? Group the sub-conditions in curly braces:

{ days(mon..fri) min(*%5) } { days(sat..sun) min(*%30) }

It is case-insensitive, whitespace-insensitive, and always UTC.

You can read more examples in his blog post: Schyntax Part 1: The Language. In Schyntax Part 2: The Task Runner (aka "Schtick") you'll see how to create a scheduled task runner using the Schyntax library.

You can easily use the library from JavaScript and C#. Bret is open-sourcing it in hopes that other implementations are created. Personally I'd love to see a Go implementation.

The Schyntax syntax and sample libraries are on GitHub.

July 23, 2015 02:28 PM

Google Blog

Neon prescription... or rather, New transcription for Google Voice

You may have been there before...open your voicemail transcriptions in Google Voice to find that at times they aren’t completely intelligible. Or, they are humorously intelligible. Either way, they might not have been the message the caller meant to leave you.

So, we asked users if they would kindly share some of their voicemails for research and system improvements. Thanks to those who participated, we are happy to announce an improved voicemail system in Google Voice and Project Fi that delivers more accurate transcriptions. Using a (deep breath) long short-term memory deep recurrent neural network (whew!), we cut our transcription errors by 49%.

To start receiving improved voicemail transcriptions, you don't need to do a thing -- just continue to use Google Voice as you have been. For those not using Google Voice but want to give it a try, sign up for a Google Voice (or Google Voice Lite) account here, it’s quick and easy to get started.

Many thanks to the Google Voice users who shared their voicemails, they really helped us make the product better. While this is a big improvement, it is just the beginning and with your input, we will continue improving voicemail transcriptions over time. We hope you enjoy it and look forward to hearing what you link—er, think!

by Google Blogs (noreply@blogger.com) at July 23, 2015 03:00 PM

Google Webmasters

Google+: A case study on App Download Interstitials

Many mobile sites use promotional app interstitials to encourage users to download their native mobile apps. For some apps, native can provide richer user experiences, and use features of the device that are currently not easy to access on a browser. Because of this, many app owners believe that they should encourage users to install the native version of their online property or service. It’s not clear how aggressively to promote the apps, and a full page interstitial can interrupt the user from reaching their desired content.

On Google+ mobile web, we decided to take a closer look at our own use of interstitials. Internal user experience studies identified them as poor experiences, and Jennifer Gove gave a great talk at I/O last year which highlights this user frustration.

Despite our intuition that we should remove the interstitial, we prefer to let data guide our decisions, so we set out to learn how the interstitial affected our users. Our analysis found that:
  • 9% of the visits to our interstitial page resulted in the ‘Get App’ button being pressed. (Note that some percentage of these users already have the app installed or may never follow through with the app store download.)
  • 69% of the visits abandoned our page. These users neither went to the app store nor continued to our mobile website.
While 9% sounds like a great CTR for any campaign, we were much more focused on the number of users who had abandoned our product due to the friction in their experience. With this data in hand, in July 2014, we decided to run an experiment and see how removing the interstitial would affect actual product usage. We added a Smart App Banner to continue promoting the native app in a less intrusive way, as recommended in the Avoid common mistakes section of our Mobile SEO Guide. The results were surprising:
  • 1-day active users on our mobile website increased by 17%.
  • G+ iOS native app installs were mostly unaffected (-2%). (We’re not reporting install numbers from Android devices since most come with Google+ installed.)
Based on these results, we decided to permanently retire the interstitial. We believe that the increase in users on our product makes this a net positive change, and we are sharing this with the hope that you will reconsider the use of promotional interstitials. Let’s remove friction and make the mobile web more useful and usable!

(Since this study, we launched a better mobile web experience that is currently without an app banner. The banner can still be seen on iOS 6 and below.)

Posted by David Morell, Software Engineer, Google+

by Google Webmaster Central (noreply@blogger.com) at July 23, 2015 08:10 AM

Raymii.org

Running Adventure on the DEC PDP-8 with SIMH

In this guide I'll show you how to run the classic Colossal Cave Adventure game on a PDP-8, emulated by the SIMH emulator. The PDP-8 was a 12-bit minicomputer made in 1964 by DEC, the Digital Equipment Corporation. We will install and set up SIMH, the emulator, with an RK05 disk image running OS/8. We will use FORTRAN on OS/8 to load ADVENTURE, then we use our brains to play the game. As a bonus, I also show you how to edit files using EDIT, and show you a bit of the OS/8 system.

July 23, 2015 12:00 AM

July 22, 2015

mypapit gnu/linux blog

Get FREE O’Reilly NGINX (Preview) book NOW!

Good news: O’Reilly is offering a free preview of its latest NGINX book. Those who download the book also stand to receive an offer for a 90-day trial of NGINX Plus, which features advanced load balancing and a performance-monitoring suite.

Download NGINX preview book now.

free nginx preview-edition

by Mohammad Hafiz mypapit Ismail at July 22, 2015 05:06 PM

Yellow Bricks

My top 15 VMworld sessions for 2015



Every year I do a top VMworld sessions post. It is getting more complicated each year as there are so many great sessions. In past years I tried to restrict myself to 20, but it always ended up being 22, 23 or even more sessions. This year I am going to be strict: 15 at most, in random order. These are the sessions I would sign up for myself; unfortunately, as a VMware employee I can’t register, but I am sure going to try to sneak in when I have time, or watch the recordings!

I did not include any sessions of my own; if you are interested in my sessions, take a look at the list below:

See you guys there!

"My top 15 VMworld sessions for 2015" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.

by Duncan Epping at July 22, 2015 03:34 PM

Tech Teapot

Your business model is not a software feature

I created a product a few years ago and, whilst it is doing fine on new sales, it is really bad at monetising the existing customer base. The reason it is doing so badly at monetising our existing customers is that I assumed the business model could be plugged in later, like any other software feature.

I was 100% wrong.

Why didn’t I build the business model in from the start? Patience. Or rather my lack of patience.

The software took quite a while to write and I was very keen to get it out of the door as quickly as possible. I got to the stage that I was sick of the sight of the software and just wanted it finished. There is nothing wrong with wanting your project finished. But your project cannot be done if the business model isn’t baked in.

When I originally created the product, I tried to create the simplest product possible that nevertheless delivered value to the customer.

I put two things off from the first version of the software. One was the ability to notify customers when a new version of the software is available. The second was the ability to renew the software subscription after the free period had elapsed.

I thought that I could plug in the business model at a later stage. Just like a new feature. Turns out I was wrong. Whilst I could retrofit it now, an awful lot of the value has been lost. Perhaps most of the value. The vast majority of existing customers will never know about the new software and so will never upgrade.

One of the nice things about developing for the various app stores is that you don’t need to build the business model. Somebody else has done that for you.

by Jack Hughes at July 22, 2015 11:21 AM

RootPrompt

Block crackers with 3 locks to your SSH door (18 Oct 2010)

Security always requires a multi-layered scheme, and SSH is a good example of this. Methods range from simple sshd configuration, through the use of PAM to specify who can use SSH, to the application of port-knocking techniques or hiding the fact that SSH access even exists. Applying these techniques can make life much harder for possible intruders, who will have to get past three unusual barriers. "Learn 3 ways of hardening SSH access to your system to block would-be crackers"
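
As a small illustration of the first of those layers, an sshd_config fragment (example values only, not taken from the linked article) that restricts who can log in and removes the easiest attack paths might look like this:

    # /etc/ssh/sshd_config (fragment) -- illustrative values, adjust for your site
    PermitRootLogin no            # no direct root logins
    PasswordAuthentication no     # keys only, nothing to brute-force
    AllowGroups ssh-users         # only members of this example group may connect
    MaxAuthTries 3
    LoginGraceTime 30

    # apply the change (service name may be 'ssh' on Debian-style systems)
    $ sudo service sshd reload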

July 22, 2015 10:38 AM

Bazaar: source control system (15 Oct 2010)

Bazaar is used to produce the Ubuntu Linux distribution, which is an enormous software project with thousands of components. If you're using a UNIX or Linux system, chances are that your distribution offers a pre-built Bazaar package. Bazaar is flexible enough to accommodate both Subversion (a centralized system) and Git (a decentralized system). This article introduces you to Bazaar's many appealing features. "Intro to Bazaar, a great place to keep your code"

July 22, 2015 10:38 AM

User space memory access from the Linux kernel (13 Oct 2010)

As the kernel and user space exist in different virtual address spaces, there are special considerations for moving data between them. Explore the ideas behind virtual address spaces and the kernel APIs for data movement to and from user space, and learn some of the other mapping techniques used to map memory. "An introduction to Linux memory and user space APIs"

July 22, 2015 10:38 AM

Techniques for migrating Perl to Python (11 Oct 2010)

Python programmers shouldn't get too smug. While many people agree that Python is designed in a way that makes it a highly readable language, there can still be problems with legacy, untested Python code too. Porting legacy Perl to Python can be a daunting task. In this article, learn some of the theory behind dealing with legacy code, including what not to do. "Techniques for migrating legacy, untested Perl to Python"

July 22, 2015 10:38 AM

New AIX 7 capabilities for virtualization (8 Oct 2010)

The IBM AIX operating system provides a highly scalable IT infrastructure for client workloads. Learn about the latest version, AIX 7.1, an open standards-based UNIX operating system that includes significant new capabilities for virtualization, security, availability, and manageability. "Learn about the latest version of AIX 7.1 - an open standards-based UNIX operating system"

July 22, 2015 10:38 AM

UnixDaemon

Use Ansible to Expand CloudFormation Templates

After a previous comment about "templating CloudFormation JSON from a tool higher up in your stack" I had a couple of queries about how I'm doing this. In this post I'll show a small example that explains the workflow. We're going to create a small CloudFormation template, with a single embedded Jinja2 directive, and call it from an example playbook.

This template creates an S3 bucket resource and dynamically sets the "DeletionPolicy" attribute based on a value in the playbook. We use a file extension of '.json.j2' to distinguish our pre-expanded templates from those that need no extra work. The line of interest in the template itself is "DeletionPolicy": "{{ deletion_policy }}". This is a Jinja2 directive that Ansible will interpolate and replace with a literal value from the playbook, helping us move past a CloudFormation Annoyance, Deletion Policy as a Parameter. Note that this template has no parameters; we're doing the work in Ansible itself.



    $ cat templates/deletion-policy.json.j2 

    {
      "AWSTemplateFormatVersion": "2010-09-09",

      "Description": "Retain on delete jinja2 template",

      "Resources": {

        "TestBucket": {
          "DeletionPolicy": "{{ deletion_policy }}",
          "Type": "AWS::S3::Bucket",
          "Properties": {
            "BucketName": "my-test-bucket-of-54321-semi-random-naming"
          }
        }

      }
    }


Now we move on to the playbook. The important part of the preamble is the deletion_policy variable, where we set the value for later use in the template. We then move on to the two essential tasks and one housekeeping task.



    $ cat playbooks/deletion-policy.playbook 
    ---
    - hosts: localhost
      connection: local
      gather_facts: False 
      vars:
        template_dir: "../templates"
        deletion_policy: "Retain" # also takes "Delete" or "Snapshot"


Because the Ansible CloudFormation module doesn't have an inbuilt option to process Jinja2, we create the stack in two stages. First we process the raw Jinja2 JSON document and create an intermediate file with the directives expanded. We then run the CloudFormation module using the newly generated file.



  tasks:
  - name: Expand the CloudFormation template for future use.
    local_action: template src={{ template_dir }}/deletion-policy.json.j2 dest={{ template_dir }}/deletion-policy.json

  - name: Create a simple stack
    cloudformation: >
      stack_name=deletion-policy
      state=present
      region="eu-west-1"
      template={{ template_dir }}/deletion-policy.json


The final task is an optional little bit of housekeeping: we remove the file we generated earlier.



  - name: Clean up the local, generated, file
    file: name={{ template_dir }}/deletion-policy.json state=absent


We've only covered a simple example here but if you're willing to commit to preprocessing your templates you can add a lot of flexibility, and heavily reduce the line count, using techniques like this. Creating multiple subnets in a VPC, adding route associations and such is another good place to introduce these techniques.
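
As a rough sketch of that idea (the subnets variable, the CIDR values, and the "TestVPC" resource below are invented for illustration), the playbook could carry a list:

    vars:
      subnets:
        - { cidr: "10.0.1.0/24", az: "eu-west-1a" }
        - { cidr: "10.0.2.0/24", az: "eu-west-1b" }

and a Jinja2 loop inside the "Resources" section of the '.json.j2' template stamps out one subnet resource per entry:

    {% for subnet in subnets %}
    "TestSubnet{{ loop.index }}": {
      "Type": "AWS::EC2::Subnet",
      "Properties": {
        "VpcId": { "Ref": "TestVPC" },
        "CidrBlock": "{{ subnet.cidr }}",
        "AvailabilityZone": "{{ subnet.az }}"
      }
    }{% if not loop.last %},{% endif %}
    {% endfor %}

The loop is expanded during the template task, so the intermediate file the cloudformation module receives is still plain CloudFormation JSON.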

July 22, 2015 10:37 AM

CloudFormation Annoyance: Deletion Policy as a Parameter

You can create some high-value resources using CloudFormation that you'd like to ensure exist even after a stack has been removed. Imagine being the admin who accidentally deletes the wrong stack and has to watch as your RDS master, and all your prod data, slowly vanishes into the void of AWS-reclaimed volumes. Luckily AWS provides a way to reduce this risk, the DeletionPolicy Attribute. By specifying this on a resource you can ensure that if your stack is deleted then certain resources survive and function as usual. This also helps keep down the number of stacks you have stuck in the "DELETE_FAILED" state if you try to remove a shared security group or such.


    "Resources": {

      "TestBucket": {
        "DeletionPolicy": "Retain",
        "Type": "AWS::S3::Bucket",
        "Properties": {
          "BucketName": "MyTestBucketOf54321SemiRandomName"
        }
      }

    }


Once you start sprinkling this attribute through your templates you'll probably feel the need to have it vary between staging and prod. While it's a lovely warm feeling to have your RDS masters in prod be a little harder to accidentally kill, you'll want a clean teardown of any frequently created staging or developer stacks, for example. The easiest way to do this is to make the DeletionPolicy take its value from a parameter, probably using code like that below.



    {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Description" : "Retain on delete test template",

      "Parameters" : {

        "RetainParam": {
          "Type": "String",
          "AllowedValues": [ "Retain", "Delete", "Snapshot" ],
          "Default": "Delete"
        }

      },
      "Resources": {

        "TestBucket": {
          "DeletionPolicy": { "Ref" : "RetainParam" },
          "Type": "AWS::S3::Bucket",
          "Properties": {
            "BucketName": "MyTestBucketOf54321SemiRandomName"
          }
        }

      }
    }


Unfortunately this doesn't work. If you try to validate your template (and we always do that, right?) you'll get an error that looks something like: cfn-validate-template: Malformed input-Template format error: Every DeletionPolicy member must be a string.

There are a couple of ways around this. The two I've used are: templating your CloudFormation JSON from a tool higher up in your stack, Ansible for example, with the downside that your templates are unrunnable without expansion; or doubling up on some resource declarations and using CloudFormation Conditionals, so the same resource is created with the DeletionPolicy set to the appropriate value based on the value of a parameter. I'm uncomfortable with the second approach because passing the wrong parameters on a stack update could trigger resource removal, so I prefer the first option.
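
For reference, a minimal sketch of that second approach (simplified to a straight Retain/Delete choice, and not something I'd run as-is) might look like:

    "Conditions": {
      "RetainResources": { "Fn::Equals": [ { "Ref": "RetainParam" }, "Retain" ] },
      "DontRetainResources": { "Fn::Not": [ { "Condition": "RetainResources" } ] }
    },

    "Resources": {

      "TestBucketRetained": {
        "Condition": "RetainResources",
        "DeletionPolicy": "Retain",
        "Type": "AWS::S3::Bucket"
      },

      "TestBucketDisposable": {
        "Condition": "DontRetainResources",
        "DeletionPolicy": "Delete",
        "Type": "AWS::S3::Bucket"
      }

    }

Everything else that refers to the bucket now has to cope with two possible logical resource names, which is part of why I prefer the templating route.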

Even though there are ways to work around this limitation, it really feels like something that 'Should Just Work', and as a CloudFormation user I'll be a lot happier when it does.

July 22, 2015 10:37 AM

AWS CloudFormation Parameters Tips: Size and AWS Types

While AWS CloudFormation is one of the best ways to ensure your AWS environments are reproducible, it can also be a bit of an awkward beast to use. Here are a couple of simple time-saving tips for refining your CFN template parameters.

The first one is also the simplest: always define at least a MinLength property on your parameters, and ideally an AllowedValues or AllowedPattern. This ensures that your stack will fail early if no value is provided. Once you start using other tools, like Ansible, to glue your stacks together it becomes very easy to create a stack parameter that has an undefined value. Without one of the above properties CloudFormation will happily accept the null and you'll either get an awkward failure later in the stack creation or a stack that doesn't quite work.
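
For example, a couple of constrained parameters might look like this (the names, values and pattern are purely illustrative):

    "Parameters" : {

      "Environment" : {
        "Description"   : "Which environment this stack belongs to",
        "Type"          : "String",
        "MinLength"     : "1",
        "AllowedValues" : [ "staging", "production" ],
        "Default"       : "staging"
      },

      "SubnetCidr" : {
        "Description"    : "CIDR block for the subnet",
        "Type"           : "String",
        "MinLength"      : "9",
        "AllowedPattern" : "([0-9]{1,3}\\.){3}[0-9]{1,3}/[0-9]{1,2}"
      }

    }

An empty value now fails the parameter's constraint at stack creation time rather than producing a half-working stack.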

The second tip is for the parameter's Type property. While it's possible to use a Type of 'String' and an AllowedPattern to ensure a value looks like an AWS resource, such as a subnet ID, the addition of AWS-specific types, available from November 2014, allows you to get a lot more specific:



  # note the value of "Type"
  "Parameters" : {

    "KeyName" : {
      "Description" : "Name of an existing EC2 KeyPair",
      "Type" : "AWS::EC2::KeyPair::KeyName",
      "Default" : "i-am-the-gate-keeper" 
    }

  }


This goes one step beyond 'Allowed*' and actually verifies that the resource exists in the user's account. It doesn't do this at the template validation stage, which would be really nice, but it does it early in the stack creation, so you don't have a long wait and a failed, rolled-back set of resources.



    # a parameter with a default key name that does not exist in aws
    "KeyName" : {
      "Description" : "Name of an existing EC2 KeyPair",
      "Type" : "AWS::EC2::KeyPair::KeyName",
      "MinLength": "1",
      "Default" : "non-existent-key"
    }

    # validate shows no errors
    $ aws cloudformation validate-template --template-body file://constraint-tester.json
    {
        "Description": "Test an AWS-specific type constraint", 
        "Parameters": [
            {
                "NoEcho": false, 
                "Description": "Name of an existing EC2 KeyPair", 
                "ParameterKey": "KeyName"
            }
        ], 
        "Capabilities": []
    }

    # but after we start stack creation and check the dashboard
    # CloudFormation shows an error as the second line in events
    ROLLBACK_IN_PROGRESS    AWS::CloudFormation::Stack      dsw-test-sg
    Parameter value non-existent-key for parameter name KeyName
    does not exist. . Rollback requested by user.


Neither of these tips will prevent you from making these errors or, unfortunately, catch them at validation time, but they will surface the issues much more quickly during actual stack creation and make your templates more robust. Here's a list of the available AWS Specific Parameter Types, in the table under the 'Type' property, and you can find more details in the 'AWS-Specific Parameter Types' section.

July 22, 2015 10:37 AM

Facter: Ansible facts in Puppet

Have you ever needed to access Ansible facts from inside Puppet? Well, if you ever do, you can use the basic ansible_facts custom fact.


    # make sure you have ansible installed
    $ sudo puppet resource package ansible ensure=present

    # clone my experimental puppet fact repo
    $ git clone https://github.com/deanwilson/unixdaemon-puppet_facts.git

    # Try running the fact
    $ ruby -I unixdaemon-puppet_facts/lib/ `which facter` ansible_facts -j
    {
      "ansible_facts": {
        "ansible_architecture": "x86_64",
        "ansible_bios_date": "04/25/2013",
        "ansible_bios_version": "RKPPT%$DSFH.86A.0457.20DF3.0425.1251",
        "ansible_cmdline": {
        ... snip ...


While it's nice to see the output in Facter, you need to make a small change to your config file to use these facts in Puppet. Set stringify_facts = false in the [main] section of your puppet.conf file and you can use the new facts inside your manifests.



    $ puppet apply -e 'notify { "Form factor: ${::ansible_facts['ansible_form_factor']}": }'
    Notice: Form factor: Desktop


Would I use this in general production? No, never again, but it's a nice reminder of how easy facter is to extend. A couple of notes if you decide to play with this fact - I deliberately filter out non-ansible facts. There was something odd about seeing facter facts nested inside Ansible ones inside facter. If you foolishly decide to use this heavily, and you're running puppet frequently, adding a simple cache for the ansible results might be worth looking at to help your performance.

July 22, 2015 10:37 AM

Puppet 3.7 File Function Improvements

Puppet's always had a couple of little inconsistencies when it comes to the file and template functions. The file function has always been able to search for multiple files and return the contents of the first file found, but it required absolute paths. The template function accepts module-based paths but doesn't allow for matching on the first found file, although this can be fixed with the Puppet Multiple Template Source Function.

One of the little niceties that came with Puppet 3.7 is an easily missed improvement to the file function that makes using it easier and more consistent with the template function. In earlier puppet versions you called file with absolute paths, like this:



  file { '/tmp/fakefile':
    content => file('/etc/puppet/modules/yourmodulename/files/fakefile')
  }


Thanks to a code submission from Daniel Thornton (which fixes an issue that's been logged since at least 2009) you can now call the file function in the same way as you'd use template, while retaining support for matching the first found file.



  file { '/tmp/fakefile':
    content => file('yourmodulename/fakefile')
  }

  # or

  file { '/tmp/fakefile':
    content => file("yourmodulename/fakefile.${::hostname}", 'yourmodulename/fakefile')
  }


Although most Puppet releases come with a couple of 'wow' features, sometimes it's the little ones like this, which add consistency to the platform and help clean up and abstract your modules, that you appreciate more in the long term.

July 22, 2015 10:37 AM

Standalone Sysadmin

So…containers. Why? How? What? Start here if you haven’t.

I tweeted a link today about running Ceph inside of Docker, something that I would like to give a shot (mostly because I want to learn Docker better than I currently do, I've never played with Ceph, and it has a lot of interesting stuff going on).

I got to thinking about it, and realized that I haven’t written much about Docker, or even containers in general.

Containers are definitely the new hotness. Kubernetes just released 1.0 today, Docker has taken the world by storm, and here I am, still impressed by my Vagrant-fu and thinking that digital watches are a pretty neat idea. What happened? Containers? But, what about virtualization? Aren’t containers virtualization? Sort of? I mean, what is a container, anyway?

Let's start there before we get in too deep, alright?

UNIX (and by extension, Linux) has, for a long time, had a pretty cool command called ‘chroot‘. The chroot command allows you to point at an arbitrary directory and say “I want that directory to be the root (/) now”. This is useful if you had a particular process or user that you wanted to cordon off from the rest of the system, for example.
This is a really big advantage over virtual machines in several ways. First, it’s not very resource intensive at all. You aren’t emulating any hardware, you aren’t spinning up an entire new kernel, you’re just moving execution over to another environment (so executing another shell), plus any services that the new environment needs to have running. It’s also very fast, for the same reason. A VM may take minutes to spin up completely, but a lightweight chroot can be done in seconds.

It’s actually pretty easy to build a workable chroot environment. You just need to install all of the things that need to exist for a system to work properly. There’s a good instruction set on geek.co.il, but it’s a bit outdated (from 2010), so here’s a quick set of updated instructions.

Just go to Digital Ocean and spin up a quick CentOS 7 box (512MB is fine) and follow along:

# rpm --rebuilddb --root=/var/tmp/chroot/
# cd /var/tmp/chroot/
# wget http://mirror.centos.org/centos/7/os/x86_64/Packages/centos-release-7-1.1503.el7.centos.2.8.x86_64.rpm
# rpm -i --root=/var/tmp/chroot --nodeps ./centos-release-7-1.1503.el7.centos.2.8.x86_64.rpm
# yum --installroot=/var/tmp/chroot install -y rpm-build yum
# cp /var/tmp/chroot/etc/skel/.??* /var/tmp/chroot/root/

At this point, there’s a pretty functional CentOS install in /var/tmp/chroot/ which you can see if you do an ‘ls’ on it.

Lets check just to make sure we are where we think we are:

# pwd
/var/tmp/chroot

Now do the magic:

# chroot /var/tmp/chroot /bin/bash -l

and voila:

# pwd
/

How much storage are we paying for this with? Not too much, all things considered:

[root@dockertesting tmp]# du -hs
436M .

Not everything is going to be functional, which you’ll quickly discover. Linux expects things like /proc/ and /sys/ to be around, and when they aren’t, it gets grumpy. The obvious problem is that if you extend /proc/ and /sys/ into the chrooted environment, you expose the outside OS to the container…probably not what you were going for.
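
For reference, the usual (and leaky) way of doing that is to mount the kernel filesystems into the chroot, something like:

# mount -t proc proc /var/tmp/chroot/proc
# mount --bind /sys /var/tmp/chroot/sys

Convenient, but now processes inside the chroot can see and poke at the host through those mounts, which is exactly the exposure described above.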

It turns out to be very hard to secure a straight chroot. BSD fixed this in the kernel by using jails, which go a lot farther in isolating the instance, but for a long time, there wasn’t an equivalent in Linux, which made escaping from a chroot jail something of a hobby for some people. Fortunately for our purposes, new tools and techniques were developed for Linux that went a long way to fixing this. Unfortunately, a lot of the old tools that you and I grew up with aren’t future compatible and are going to have to go away.

Enter the concept of cgroups. Cgroups are in-kernel walls that can be erected to limit resources for a group of processes. They also allow for prioritization, accounting, and control of processes, and they paved the way for a lot of other features that containers take advantage of, like namespaces (think cgroups, but for things like networks or process IDs: each container gets its own network interface, say, but doesn't have the ability to spy on its neighbors running on the same machine, and each container can have its own PID 1, which isn't possible with old chroot environments).
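
To make that a little more concrete, here's a rough sketch on the same CentOS 7 droplet (cgroup v1 paths, and the "demo" group name is just an example) that caps memory for a group of processes and gives a shell its own PID namespace:

[root@dockertesting ~]# mkdir /sys/fs/cgroup/memory/demo
[root@dockertesting ~]# echo $((128*1024*1024)) > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
[root@dockertesting ~]# echo $$ > /sys/fs/cgroup/memory/demo/tasks
[root@dockertesting ~]# unshare --pid --fork --mount-proc /bin/bash
[root@dockertesting ~]# echo $$
1

The shell (and anything it spawns) is now limited to 128MB of RAM, and inside the unshare'd bash the shell believes it is PID 1. Container runtimes are, at their core, automating exactly this kind of plumbing.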

You can see that with containers, there are a lot of benefits, and thanks to modern kernel architecture, we lose a lot of the drawbacks we used to have. This is the reason that containers are so hot right now. I can spin up hundreds of docker containers in the time that it takes my fastest Vagrant image to boot.

Docker. I keep saying Docker. What’s Docker?

Well, it’s one tool for managing containers. Remember all of the stuff we went through above to get the chroot environment set up? We had to make a directory, force-install an RPM that could then tell yum what OS to install, then we had to actually have yum install the OS, and then we had to set up root’s skel so that we had aliases and all of that stuff. And after all of that work, we didn’t even have a machine that did anything. Wouldn’t it be nice if there was a way to just say the equivalent of “Hey! Make me a new machine!”? Enter docker.

Docker is a tool to manage containers. It manages the images, the instances of them, and overall, does a lot for you. It’s pretty easy to get started, too. In the CentOS machine you spun up above, just run

# yum install docker -y
# service docker start

Docker will install, and then it’ll start up the docker daemon that runs in the background, keeping tabs on instances.

At this point, you should be able to run Docker. Test it by running the hello-world instance:

[root@dockertesting ~]# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from docker.io/hello-world
a8219747be10: Pull complete
91c95931e552: Already exists
docker.io/hello-world:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:aa03e5d0d5553b4c3473e89c8619cf79df368babd18681cf5daeb82aab55838d
Status: Downloaded newer image for docker.io/hello-world:latest
Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
Hello from Docker.
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(Assuming it was not already locally available.)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

For more examples and ideas, visit:
http://docs.docker.com/userguide/

The message is pretty explanatory, but you can see how it understood that you asked for something that it didn’t immediately know about, searched the repository at the Docker Hub to find it, pulled it down, and ran it.

There’s a wide world of Docker out there (start with the Docker docs, for instance, then maybe read Nathan LeClaire’s blog, among others).

And Docker is just the beginning. Docker’s backend uses something called “libcontainer” (actually now or soon to be runc, of the Open Container format), but it migrated to that from LXC, another set of tools and API for manipulating the kernel to create containers. And then there’s Kubernetes, which you can get started with on about a dozen platforms pretty easily.

Just remember to shut down and destroy your Digital Ocean droplet when you’re done, and in all likelihood, you’ve spent less than a quarter to have a fast machine with lots of bandwidth to serve as a really convenient learning lab. This is why I love Digital Ocean for stuff like this!

In wrapping this up, I feel like I should add this: Don’t get overwhelmed. There’s a TON of new stuff out there, and there’s new stuff coming up constantly. Don’t feel like you have to keep up with all of it, because that’s not possible to do while maintaining the rest of your life’s balance. Pick something, and learn how it works. Don’t worry about the rest of it – once you’ve learned something, you can use that as a springboard to understanding how the next thing works. They all sort of follow the same pattern, and they’re all working toward the same goal of rapidly-available short-lived service providing instances. They use different formats, backends, and so on, but it’s not important to master all of them, and feeling overwhelmed is a normal response to a situation like this.

Just take it one step at a time and try to have fun with it. Learning should be enjoyable, so don’t let it get you down. Comment below and let me know what you think of Docker (or Kubernetes, or whatever your favorite new tech is).

by Matt Simmons at July 22, 2015 10:32 AM


Administered by Joe. Content copyright by their respective authors.