Planet Sysadmin               

          blogs for sysadmins, chosen by sysadmins...

July 25, 2014

Yellow Bricks

Must attend VMworld sessions 2014


Every year I do a post on the must-attend VMworld sessions, and I just realized I had not done this for 2014 yet. So here it is: the list of sessions I feel are most definitely worth attending. I tend to focus on sessions which I know will have solid technical info and great presenters, many of whom I have seen present over the years and respect very much. I tried to limit the list to 20 this year, so of course it could be that your session (or your favourite session) is missing; unfortunately I cannot list them all, as that would defeat the purpose.

Here we go:

  1. STO3008-SPO - Decoupled Storage: Practical Examples of Leveraging Server Flash in a Virtualized Datacenter by Satyam Vaghani and Frank Denneman. What more do I need to say? Both rock stars!
  2. STO1279 - Virtual SAN Architecture Deep Dive. Christian and Christos were the leads on VSAN; who can tell you better than they can?
  3. SDDC1176 - Ask the Expert vBloggers featuring Chad Sakac, Scott Lowe, William Lam, myself and moderated by Rick Scherer. This session has been a hit for the last few years and will be one you cannot miss!
  4. STO2996-SPO - The vExpert Storage Game Show featuring Vaughn Stewart, Cormac Hogan, Rawlinson Rivera and many others… It will be educational and entertaining for sure! Not the standard “death by powerpoint” session. If you do want “DBP”, this is not for you!
  5. STP3266 - Web-Scale Converged Infrastructure for Enterprise. Josh Odgers talking web scale for Enterprise organizations. Are you still using legacy apps? Then this is a must-attend.
  6. SDDC2492 - How the New Software-defined Paradigms Will Impact Your vSphere Design Forbes Guthrie and Scott Lowe talking vSphere Design, you bet you will learn something here!
  7. HBC2068 - vCloud Hybrid Service Networking Technical Deep Dive Want to know more about vCHS networking, I am sure David Hill is going to dive deep!
  8. NET2747 - VMware NSX: Software Defined Networking in the Real World Chris Wahl and Jason Nash talking networking, what is there not to like?
  9. BCO1893 - Site Recovery Manager and vCloud Automation Center: Self-service DR Protection for the Software-Defined Data Center. Lee Dilworth was my co-presenter for the previous two VMworlds, so he knows what he is talking about! He is co-hosting this DR session with one of the BC/DR PMs, Ben Meadowcroft. This will be good.
  10. NET1674 - Advanced Topics & Future Directions in Network Virtualization with NSX I have seen Bruce Davie present multiple times, always a pleasure and educational!
  11. STO2496 - vSphere Storage Best Practices: Next-Gen Storage Technologies Chad and Vaughn in one session… this will be good!
  12. BCO2629 - Site Recovery Manager and vSphere Replication: What’s New Technical Deep Dive Jeff Hunter and Ken Werneburg are the DR experts at VMware Tech Marketing, so worth attending for sure!
  13. HBC2638 - Ten Vital Best Practices for Effective Hybrid Cloud Security by Russel Callen and Matthew Probst… These guys are the vCHS architects, you can bet this will be useful!
  14. STO3162 - Software Defined Storage: Satisfy the Requirements of Your Application at the Granularity of a Virtual Disk with Virtual Volumes (VVols) Cormac Hogan talking VVOLs with Rochna from Nimble, this is one I would like to see!
  15. STO2480 - Software Defined Storage – The VCDX Way Part II : The Empire Strikes Back The title by itself is enough to attend this one… (Wade Holmes and Rolo Rivera)
  16. SDDC3281 - A DevOps Story: Unlocking the Power of Docker with the VMware platform and its ecosystem. You may not know these guys, but I do… Aaron and George are rock stars, and Docker seems to be the new buzzword. Find out what it is about!
  17. VAPP2979 - Advanced SQL Server on vSphere Techniques and Best Practices Scott and Jeff are the experts when it comes to virtualizing SQL, what more can I say?!
  18. STO2197 - Storage DRS: Deep Dive and Best Practices Mustafa Uysal is the lead on SDRS/SIOC, I am sure this session will contain some gems!
  19. HBC1534 - Recovery as a Service (RaaS) with vCloud Hybrid Service David Hill and Chris Colotti talking, always a pleasure to attend!
  20. MGT1876 - Troubleshooting With vCenter Operations Manager (Live Demo) Wondering why your VM is slow? Sam McBride and Praveen Kannan will show you live…
  21. INF1601 - Taking Reporting and Command Line Automation to the Next Level with PowerCLI with Alan Renouf and Luc Dekens, all I would like to know is if PowerCLI-man is going to be there or not?

As stated, some of your fav sessions may be missing… feel free to leave a suggestion so that others know which sessions they should attend.

"Must attend VMworld sessions 2014" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.


Pre-order my upcoming book Essential Virtual SAN via Pearson today!

by Duncan Epping at July 25, 2014 07:14 PM

Everything Sysadmin

System Administrator Appreciation Day Spotlight: SuperUser.com

This being System Administrator Appreciation Day, I'd like to give a shout out to all the people of the superuser.com community.

If you aren't a system administrator, but have technical questions that you want to bring to your system administrator, this is the place for you. Do you feel bad that you keep interrupting your office IT person with questions? Maybe you should try SuperUser first. You might get your answers faster.

Whether it is a question about your home cable modem or the mysteries of having to reboot after uninstalling software (this discussion will surprise you), this community will probably reduce the number of times each week that you interrupt your IT person.

If you are a system administrator, and like to help people, consider poking around the unanswered questions page!

Happy Sysadmin Appreciation Day!

Tom

P.S. "The Practice of System and Network Administration" is InformIT's "eBook deal of the day" and can be purchased at an extreme discount for 24 hours.

July 25, 2014 04:28 PM

System Administrator Appreciation Day Spotlight: ServerFault.com

This being System Administrator Appreciation Day, I'd like to give a shout out to all the people on ServerFault.com who help system administrators find the answers to their questions. If you are patient, understanding, and looking to help fellow system administrators, this is a worthy community to join.

I like to click on the "Unanswered" button and see what questions are most in need of a little love.

Sometimes I click on the "hot this week" tab and see what has been answered recently. I always learn a lot, and today was no exception.

ServerFault also has a chatroom, which is a useful place to hang out and meet the other members.

Happy Sysadmin Appreciation Day!

Tom

P.S. "The Practice of System and Network Administration" is InformIT's "eBook deal of the day" and can be purchased at an extreme discount for 24 hours.

July 25, 2014 03:28 PM

I'm a system administrator one day a year.

Many years ago I was working at a company and our department's administrative assistant was very strict about not letting people call her a "secretary". She made one exception, however, on "Secretary Appreciation Day".

I'm an SRE at StackExchange.com. That's a Site Reliability Engineer. We try to hold to the ideals of what an SRE is, as set forth in Ben Treynor's keynote at SRECon.

My job title is "SRE". Except one day each year. Since today is System Administrator Appreciation Day, I'm definitely a system administrator... today.

Sysadmins go by many job titles. The company I work for provides Question and Answer communities on over 120 topics. Many of them are technical and appeal to the many kinds of people who work as system administrators.

We also have fun sites of interest to sysadmins like Video Games and Skeptics.

All these sites are fun places to poke around and read random answers. Of course, contributing answers is fun too.

Happy System Administrator Appreciation Day!

P.S. "The Practice of System and Network Administration" is InformIT's "eBook deal of the day" and can be purchased at an extreme discount for 24 hours.

July 25, 2014 01:28 PM

50% off Time Management for System Administrators

O'Reilly is running a special deal to celebrate SysAdmin Day. For one day only, save 50% on a wide range of system administration ebooks and training videos from shop.oreilly.com. If you scroll down to page 18, you'll find Time Management for System Administrators is included.

50% off is pretty astounding. Considering that the book is cheaper than most ($19.99 for the eBook), there's practically no excuse not to have a copy. Finding the time to read it... that may be another matter entirely. I hate to say "you owe it to yourself" but, seriously, if you are stressed out, overworked, and underappreciated, this might be a good time to take a break and read the first chapter or two.

July 25, 2014 01:28 PM

Standalone Sysadmin

Happy SysAdmin Appreciation Day 2014!

Well, well, well, look what Friday it is!

In 1999, SysAdmin Ted Kekatos decided that, like administrative professionals, teachers, and cabbage, system administrators needed a day to recognize them, to appreciate what they do, their culture, and their impact on the business, and so he created SysAdmin Appreciation Day. And we all reap the benefit of that! Or at least, we can treat ourselves to something nice and have a really great excuse!

Speaking of which, there are several things I know of going on around the world today.

As always, there are a lot of videos released on the topic. Some of the best I've seen are here:

  • A karaoke power-ballad from Spiceworks
  • A very funny piece on users copping to some bad behavior from Sophos
  • A heartfelt thanks from ManageEngine
  • Imagine what life would be like without SysAdmins, by SysAid

Also, I feel like I should mention that one of my party sponsors, Goverlan, is also giving away their software today to the first thousand people who sign up for it. It's a $900 value, so if you do Active Directory management, you should probably check that out.

Sophos was giving away socks, but it was so popular that they ran out. Not before sending me the whole set, though!

I don't even remember signing up for that, but apparently I did, because they came to my work address. Awesome!

I'm sure there are other things going on, too. Why not comment below if you know of one?

All in all, have a great day and try to find some people to get together with and hang out. Relax, and take it easy. You deserve it.

by Matt Simmons at July 25, 2014 12:52 PM

Chris Siebenmann

The OmniOS version of SSH is kind of slow for bulk transfers

If you look at the manpage and so on, it's sort of obvious that the Illumos and thus OmniOS version of SSH is rather behind the times; Sun branched from OpenSSH years ago to add some features they felt were important and it has not really been resynchronized since then. It (and before it the Solaris version) also has transfer speeds that are kind of slow due to the SSH cipher et al overhead. I tested this years ago (I believe close to the beginning of our ZFS fileservers), but today I wound up retesting it to see if anything had changed from the relatively early days of Solaris 10.

My simple tests today were on essentially identical hardware (our new fileserver hardware) running OmniOS r151010j and CentOS 7. Because I was doing loopback tests with the server itself for simplicity, I had to restrict my OmniOS tests to the ciphers that the OmniOS SSH server is configured to accept by default; at the moment that is aes128-ctr, aes192-ctr, aes256-ctr, arcfour128, arcfour256, and arcfour. Out of this list, the AES ciphers run from 42 MBytes/sec down to 32 MBytes/sec while the arcfour ciphers mostly run around 126 MBytes/sec (with hmac-md5) to 130 Mbytes/sec (with hmac-sha1).

(OmniOS unfortunately doesn't have any of the umac-* MACs that I found to be significantly faster.)
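
As an illustration, a minimal sketch of this sort of loopback throughput test might look like the following; it assumes passwordless ssh to localhost and that the chosen ciphers and MACs are accepted by the local sshd, so adjust both to taste.

# Rough loopback test of SSH bulk-transfer speed for a few cipher/MAC
# combinations; 1 GiB of zeros, so MBytes/sec is roughly 1024 divided
# by the reported 'real' time.
for cipher in aes128-ctr arcfour; do
    for mac in hmac-md5 hmac-sha1; do
        echo "== $cipher / $mac =="
        time dd if=/dev/zero bs=1024k count=1024 2>/dev/null | \
            ssh -c "$cipher" -m "$mac" localhost 'cat > /dev/null'
    done
done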

This is actually an important result because aes128-ctr is the default cipher for clients on OmniOS. In other words, the default SSH setup on OmniOS is about a third of the speed that it could be. This could be very important if you're planning to do bulk data transfers over SSH (perhaps to migrate ZFS filesystems from old fileservers to new ones).

The good news is that this is faster than 1G Ethernet; the bad news is that this is not very impressive compared to what Linux can get on the same hardware. We can make two comparisons here to show how slow OmniOS is compared to Linux. First, on Linux the best result on the OmniOS ciphers and MACs is aes128-ctr with hmac-sha1 at 180 Mbytes/sec (aes128-ctr with hmac-md5 is around 175 MBytes/sec), and even the arcfour ciphers run about 5 Mbytes/sec faster than on OmniOS. If we open this up to the more extensive set of Linux ciphers and MACs, the champion is aes128-ctr with umac-64-etm at around 335 MBytes/sec and all of the aes GCM variants come in with impressive performances of 250 Mbytes/sec and up (umac-64-etm improves things a bit here but not as much as it does for aes128-ctr).

(I believe that one reason Linux is much faster on the AES ciphers is that the version of OpenSSH that Linux uses has tuned assembly for AES and possibly uses Intel's AES instructions.)

In summary, through a combination of missing optimizations and missing ciphers and MACs, OmniOS's normal version of OpenSSH is leaving more than half the performance it could be getting on the table.

(The 'good' news for us is that we are doing all transfers from our old fileservers over 1G Ethernet, so OmniOS's ssh speeds are not going to be the limiting factor. The bad news is that our old fileservers have significantly slower CPUs and as a result max out at about 55 Mbytes/sec with arcfour (and interestingly, hmac-md5 is better than hmac-sha1 on them).)

PS: If I thought that network performance was more of a limit than disk performance for our ZFS transfers from old fileservers to the new ones, I would investigate shuffling the data across the network without using SSH. I currently haven't seen any sign that this is the case; our 'zfs send | zfs recv' runs have all been slower than this. Still, it's an option that I may experiment with (and who knows, a slow network transfer may have been having knock-on effects).
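
For reference, one common way of shuffling the data across the network without SSH is to push the ZFS send stream over a plain TCP connection with netcat; a rough sketch is below (the snapshot name, hostname, and port are placeholders, and nc's listen syntax varies between implementations).

# On the receiving (new) fileserver: listen and feed the stream to zfs recv.
nc -l 9000 | zfs recv -F tank/fs

# On the sending (old) fileserver: stream the snapshot to the receiver.
zfs send tank/fs@migrate | nc newserver 9000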

by cks at July 25, 2014 05:36 AM

Everything Sysadmin

Special deal for Sysadmin Appreciation Day

Today is Sysadmin Appreciation Day! We appreciate all of you! The Practice of System and Network Administration is today's InformIT eBook Deal of the Day. Click on the link to get a special discount.

July 25, 2014 05:28 AM

July 24, 2014

Yellow Bricks

Software Defined Storage, which phase are you in?!


Working within R&D at VMware means you typically work with technology which is 1-2 years out, and discuss futures of products which are 2-3 years out. Especially in the storage space a lot has changed. Not just innovations within the hypervisor by VMware like Storage DRS, Storage IO Control, VMFS-5, VM Storage Policies (SPBM), vSphere Flash Read Cache, Virtual SAN etc., but also by partners who do software-based solutions like PernixData (FVP), Atlantis (ILIO) and SanDisk FlashSoft. Of course there is the whole Server SAN / hyper-converged movement with Nutanix, ScaleIO, Pivot3, SimpliVity and others. Then there is the whole slew of new storage systems, some of which are scale-out and all-flash, others which focus more on simplicity; here we are talking about Nimble, Tintri, Pure Storage, XtremIO, Coho Data, SolidFire and many many more.

Looking at it from my perspective, I would say there are multiple phases when it comes to the SDS journey:

  • Phase 0 – Legacy storage with NFS / VMFS
  • Phase 1 – Legacy storage with NFS / VMFS + Storage IO Control and Storage DRS
  • Phase 2 – Hybrid solutions (Legacy storage + acceleration solutions or hybrid storage)
  • Phase 3 – Object granular policy driven (scale out) storage

I have written about Software Defined Storage multiple times in the last couple of years and have worked with various solutions which are considered to be “Software Defined Storage”, so I have a certain view of what the world looks like. However, when I talk to some of our customers, reality is different; some seem very happy with what they have in Phase 0. Although all of the above is the way of the future, and for some may be reality today, I do realise that Phases 1, 2 and 3 may be far away for many. I would like to invite all of you to share:

  1. Which phase you are in, and where you would like to go?
  2. What you are struggling with most today that is driving you to look at new solutions?

"Software Defined Storage, which phase are you in?!" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.


Pre-order my upcoming book Essential Virtual SAN via Pearson today!

by Duncan Epping at July 24, 2014 03:30 PM

Tech Teapot

Software the old fashioned way

I was clearing out my old bedroom, after many years of nagging by my parents, when I came across two of my old floppy disk boxes. Contained within is a small snapshot of my personal computing starting from 1990 through until late 1992. Everything before and after those dates doesn't survive, I'm afraid.

Pictured: XLISP source code disk, CLIPS expert system sources disk, Little Smalltalk interpreter disk and Micro EMACS disks.

The archive contains loads of backups of work I produced, now stored on Github, as well as public domain / shareware software, magazine cover disks and commercial software I purchased. Yes, people used to actually buy software. With real money. A PC game back in the late 1980s cost around £50 in 1980s money. According to this historic inflation calculator, that would be £117 now. Pretty close to a week’s salary for me at the time.

One of my better discoveries from the late 1980s was public domain and shareware software libraries. Back then there were a number of libraries, usually advertised in the small ads at the back of computer magazines.

This is a run down of how you’d use your typical software library:

  1. Find an advert from a suitable library and write them a nice little letter requesting they send you a catalog. Include payment as necessary;
  2. Wait for a week or two;
  3. Receive a small, photocopied catalog with lists of floppies and a brief description of the contents;
  4. Send the order form back to the library with payment, usually by cheque;
  5. Wait for another week or two;
  6. Receive a small padded envelope through the post with your selection of floppies;
  7. Explore and enjoy!

If you received your order in two weeks you were doing well. After the first order, once you had your catalog to hand, you could get your order in around a week. A week was pretty quick for pretty well anything back then.

The libraries were run as small home businesses. They were the perfect second income. Everything was done by mail, all you had to do was send catalogs when requested and process orders.

One of the really nice things about shareware libraries was that you never really knew what you were going to get. Whilst you'd have an idea of what was on the disk from the description in the catalog, there'd be a lot of programs that were not described. Getting a new delivery was like a mini MS-DOS based text adventure, discovering all of the neat things on the disks.

The libraries contained lots of different things, mostly shareware applications of every kind you can think of. The most interesting to me as an aspiring programmer was the array of public domain software. Public domain software was distributed with the source code. There is no better learning tool when programming than reading other people's code. The best code I've ever read was the CLIPS sources for a forward chaining expert system shell written by NASA.

Happy days :)

PS All of the floppies I’ve tried so far still work :) Not bad after 23 years.

The post Software the old fashioned way appeared first on Openxtra Tech Teapot.

by Jack Hughes at July 24, 2014 07:00 AM

Aaron Johnson

Links: 7-23-2014

  • Microsoft’s New CEO Needs An Editor | Monday Note
    Loved the hierarchy of ideas graph, quote: "The top layer deals with the Identity or Culture — I use the two terms interchangeably as one determines the other. One level down, we have Goals, where the group is going. Then come the Strategies or the paths to those goals. Finally, we have the Plan, the deployment of troops, time, and money."
    (categories: strategy business communication writing motivation culture microsoft )

  • Hierarchy of ideas | Chunking | NLP
    More on the hierarchy of ideas, this time as it relates to conflict, quote: "In NLP we learn to use this hierarchy of ideas and chunking to assist others in overcoming their problems, we use it to improve our communication skills (so we understand how others are thinking and how we are creating our own problems). We use it to discover the deep structure behind peoples thinking and the words that they are using."
    (categories: chunking ideas communication strategy conflict negotiation )

by ajohnson at July 24, 2014 06:30 AM

Chris Siebenmann

What influences SSH's bulk transfer speeds

A number of years ago I wrote How fast various ssh ciphers are because I was curious about just how fast you could do bulk SSH transfers and how to get them to go fast under various circumstances. Since then I have learned somewhat more about SSH speed and what controls what things you have available and can get.

To start with, my entry from years ago was naively incomplete because SSH encryption has two components: it has both a cipher and a cryptographic hash used as the MAC. The choice of both of them can matter, especially if you're willing to deliberately weaken the MAC. As an example of how much of an impact this might make, in my testing on a Linux machine I could almost double SSH bandwidth by switching from the default MAC to 'umac-64-etm@openssh.com'.

(At the same time, no other MAC choice made much of a difference within a particular cipher, although hmac-sha1 was sometimes a bit faster than hmac-md5.)

Clients set the cipher list with -c and the MAC with -m, or with the Ciphers and MACs options in your SSH configuration file (either a personal one or a global one). However, what the client wants to use has to be both supported by the server and accepted by it; this is set in the server's Ciphers and MACs configuration options. The manpages for ssh_config and sshd_config on your system will hopefully document both what your system supports at all and what it's set to accept by default. Note that this is not necessarily the same thing; I've seen systems where sshd knows about ciphers that it will not accept by default.

(Some modern versions of OpenSSH also report this information through 'ssh -Q <option>'; see the ssh manpage for details. Note that such lists are not necessarily reported in preference order.)

At least some SSH clients will tell you what the server's list of acceptable ciphers (and MACs) is if you tell the client to use options that the server doesn't support. If you wanted to, I suspect that you could write a program in some language with SSH protocol libraries that dumped all of this information for you for an arbitrary server (without the fuss of having to find a cipher and MAC that your client knew about but your server didn't accept).

Running 'ssh -v' will report the negotiated cipher and MAC that are being used for the connection. Technically there are two sets of them, one for the client to server and one for the server back to the client, but I believe that under all but really exceptional situations you'll use the same cipher and MAC in both directions.
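
As a concrete illustration (on an OpenSSH recent enough to have -Q), this is roughly how you can list what your client supports, force a particular cipher and MAC for a transfer, and then check what actually got negotiated; the host and path names are placeholders.

# What this ssh client supports at all.
ssh -Q cipher
ssh -Q mac

# Force a specific cipher and MAC for one bulk transfer; the server
# must also accept both or the connection will fail.
tar cf - /some/dir | ssh -c aes128-ctr -m umac-64-etm@openssh.com \
    somehost 'cat > /tmp/dir.tar'

# The negotiated cipher and MAC show up on the 'kex:' debug lines.
ssh -v somehost true 2>&1 | grep 'kex:'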

Different Unix OSes may differ significantly in their support for both ciphers and MACs. In particular Solaris effectively forked a relatively old version of OpenSSH and so modern versions of Illumos (and Illumos distributions such as OmniOS) do not offer you anywhere near a modern list of choices here. How recent your distribution is will also matter; our Ubuntu 14.04 machines naturally offer us a lot more choice than our Ubuntu 10.04 ones.

PS: helpfully the latest OpenSSH manpages are online (cf), so the current manpage for ssh_config will tell you the latest set of ciphers and MACs supported by the official OpenSSH and also show the current preference order. To my interest it appears that OpenSSH now defaults to the very fast umac-64-etm MAC.

by cks at July 24, 2014 03:22 AM

July 23, 2014

mikas blog

Book Review: The Docker Book

Docker is an open-source project that automates the deployment of applications inside software containers. I’m responsible for a docker setup with Jenkins integration and a private docker-registry setup at a customer and pre-ordered James Turnbull’s “The Docker Book” a few months ago.

Recently James – he’s working for Docker Inc – released the first version of the book and, thanks to being on holidays, I already had a few hours to read it AND blog about it. :) (Note: I’ve read the Kindle version 1.0.0 and all the issues I found and reported to James have been fixed in the current version already, yay.)

The book is very well written and covers all the basics to get familiar with Docker and in my opinion it does a better job at that than the official user guide because of the way the book is structured. The book is also a more approachable way for learning some best practices and commonly used command lines than going through the official reference (but reading the reference after reading the book is still worth it).

I like James’ approach with “ENV REFRESHED_AT $TIMESTAMP” for better controlling the cache behaviour and will definitely consider using this in my own setups as well. What I wasn’t aware of is that you can directly invoke “docker build $git_repos_url”, and I further noted a few command line switches I should get more comfortable with. I also plan to check out the Automated Builds on Docker Hub.
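
For anyone who hasn't seen the trick, here is a small sketch (not taken from the book) of how an ENV REFRESHED_AT line is typically used to bust the build cache, plus building straight from a git repository; the image names and the repository URL are made up.

# Bumping REFRESHED_AT invalidates the cache from that instruction on,
# so the apt-get layers below it are re-run on the next build.
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
MAINTAINER you@example.com
ENV REFRESHED_AT 2014-07-23
RUN apt-get update && apt-get -y install nginx
EOF
docker build -t example/nginx .

# docker build also accepts a git repository URL that contains a Dockerfile.
docker build -t example/fromgit github.com/example/docker-nginx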

There are some references to further online resources, which is relevant especially for the more advanced use cases, so I’d recommend having network access available while reading the book.

What I’m missing in the book are best practices for running a private docker-registry in a production environment (high availability, scaling options,…). The provided Jenkins use cases are also very basic and nothing I personally would use. I’d also love to see how other folks are using the Docker plugin, the Docker build step plugin or the Docker build publish plugin in production (the plugins aren’t covered in the book at all). But I’m aware that these are fast-moving parts and specialised use cases – upcoming versions of the book are already supposed to cover orchestration with libswarm, developing Docker plugins and more advanced topics, so I’m looking forward to further updates of the book (which you get for free as an existing customer, another plus).

Conclusion: I enjoyed reading the Docker book and can recommend it, especially if you’re either new to Docker or want to get further ideas and inspirations what folks from Docker Inc consider best practices.

by mika at July 23, 2014 08:16 PM

UnixDaemon

Ansible AWS Lookup Plugins

Once we started linking multiple CloudFormation stacks together with Ansible we started to feel the need to query Amazon Web Services for both the output values from existing CloudFormation stacks and certain other values, such as security group IDs and Elasticache Replication Group Endpoints. We found that the quickest and easiest way to gather this information was with a handful of Ansible Lookup Plugins.

I've put the code for the more generic Ansible AWS Lookup Plugins on github and even if you're an Ansible user who's not using AWS they are worth a look just to see how easy it is to write one.

In order to use these lookup plugins you'll want to configure both your default AWS credentials and, unless you want to keep the plugins alongside your playbooks, your lookup plugins path in your Ansible config.

First we configure the credentials for boto, the underlying AWS library used by Ansible.



cat ~/.aws/credentials
[default]
aws_access_key_id = 
aws_secret_access_key =


Then we can tell ansible where to find the plugins themselves.



cat ~/.ansible.cfg

[defaults]
...
lookup_plugins = /path/to/git/checkout/cloudformations/ansible-plugins/lookup_plugins


And lastly we can test that everything is working correctly



$ cat region-test.playbook 
---
- hosts: localhost
  connection: local
  gather_facts: False

  tasks:
  - shell: echo region is =={{ item }}==
    with_items: lookup('aws_regions').split(',')

# and then run the playbook
$ ansible-playbook -i hosts region-test.playbook

Now you've seen how easy it is, go write your own!

July 23, 2014 06:28 PM

Tech Teapot

Early 1990s Software Development Tools for Microsoft Windows

The early 1990s were an interesting time for software developers. Many of the tools that are taken for granted today made their debut for a mass market audience.

I don’t mean that the tools were not available previously. Both Smalltalk  and LISP sported what would today be considered modern development environments all the way back in the 1970s, but hardware requirements put the tools well beyond the means of regular joe programmers. Not too many people had workstations at home or in the office for that matter.

I spent the early 1990s giving most of my money to software development tool companies of one flavour or another.

Pictured: Actor v4.0 floppy disks, Whitewater Object Graphics floppy disk, MKS LEX/YACC floppy disks and Turbo Pascal for Windows floppy disks.

Actor was a combination of object oriented language and programming environment for very early versions of Microsoft Windows. There is a review in Info World magazine of Actor version 3 that makes interesting reading. It was somewhat similar to Smalltalk, but rather more practical for building distributable programs. Unlike Smalltalk, it was not cross platform but on the plus side, programs did look like native Windows programs. It was very much ahead of its time in terms of both the language and the programming environment and ran on pretty modest hardware.

I gave Borland quite a lot of money too. I bought Turbo Pascal for Windows when it was released, having bought regular old Turbo Pascal v6 for DOS a year or so earlier. The floppy disks don’t have a version number on them so I have no idea which version it is. Turbo Pascal for Windows eventually morphed into Delphi.

I bought Microsoft C version 6, which, while introducing a DOS based IDE, was still very much an old school C compiler. If you wanted to create Windows software you needed to buy the Microsoft Windows SDK at considerable extra cost.

Asymetrix Toolbook was marketed in the early 1990s as a generic Microsoft Windows development tool. There are old Info World reviews here and here. Asymetrix later moved the product to be a learning authorship tool. I rather liked the tool, though it didn’t really have the performance and flexibility I was looking for. Distributing your finished work was also not a strong point.

Microsoft Quick C for Windows version 1.0 was released in late 1991. Quick C bundled a C compiler with the Windows SDK so that you could build 16 bit Windows software. It also sported an integrated C text editor, resource editor  and debugger.

The first version of Visual Basic was released in 1991. I am not sure why I didn’t buy it, I imagine there was some programming language snobbery on my part. I know there are plenty of programmers of a certain age who go all glassy eyed at the mere thought of BASIC, but I’m not one of them. Visual Basic also had an integrated editor and debugger.

Both Quick C and Visual Basic are the immediate predecessors of the Visual Studio product of today.

The post Early 1990s Software Development Tools for Microsoft Windows appeared first on Openxtra Tech Teapot.

by Jack Hughes at July 23, 2014 10:59 AM

Chris Siebenmann

One of SELinux's important limits

People occasionally push SELinux as the cure for security problems and look down on people who routinely disable it (as we do). I have some previously expressed views on this general attitude, but what I feel like pointing out today is that SELinux's security has some important intrinsic limits. One big one is that SELinux only acts at process boundaries.

By its nature, SELinux exists to stop a process (or a collection of them) from doing 'bad things' to the rest of the system and to the outside environment. But there are any number of dangerous exploits that do not cross a process's boundaries this way; the most infamous recent one is Heartbleed. SELinux can do nothing to stop these exploits because they happen entirely inside the process, in spheres fully outside its domain. SELinux can only act if the exploit seeks to exfiltrate data (or influence the outside world) through some new channel that the process does not normally use, and in many cases the exploit doesn't need to do that (and often doesn't bother).

Or in short, SELinux cannot stop your web server or your web browser from getting compromised, only from doing new stuff afterwards. Sending all of the secrets that your browser or server already has access to to someone in the outside world? There's nothing SELinux can do about that (assuming that the attacker is competent). This is a large and damaging territory that SELinux doesn't help with.

(Yes, yes, privilege separation. There are a number of ways in which this is the mathematical security answer instead of the real one, including that most network related programs today are not privilege separated. Chrome exploits also have demonstrated that privilege separation is very hard to make leak-proof.)

by cks at July 23, 2014 04:25 AM

July 22, 2014

Ubuntu Geek

Install webmin on ubuntu 14.04 (Trusty Tahr) Server

Webmin is a web-based interface for system administration for Unix. Using any modern web browser, you can setup user accounts, Apache, DNS, file sharing and much more. Webmin removes the need to manually edit Unix configuration files like /etc/passwd, and lets you manage a system from the console or remotely.
(...)
Read the rest of Install webmin on ubuntu 14.04 (Trusty Tahr) Server (141 words)


© ruchi for Ubuntu Geek, 2014.

by ruchi at July 22, 2014 11:34 PM

Everything Sysadmin

Tom will be the October speaker at Philly Linux Users Group (central)

I've just been booked to speak at PLUG/Central in October. I'll be speaking about our newest book, The Practice of Cloud System Administration.

For a list of all upcoming speaking engagements, visit our appearances page: http://everythingsysadmin.com/appearances/

July 22, 2014 07:30 PM

Milek

Massive msync() speed up on ZFS

MongoDB is using mmap() to access all its files. It also has a special thread which by default wakes up every 60s and calls msync() for all the mmap'ed files, one at a time. Initially, when you start MongoDB, all these msync()s are fast, assuming there are no modifications. However, if your server has hundreds of GBs of RAM (or more) and the database is also that big, all these msync()s get slower over time - they take longer the more data is cached in RAM. Eventually it can take 50s or more for the thread to finish syncing all of the files, even if there is nothing to write to disk. The actual problem is that while the thread is syncing all the files it holds a global lock, and until it finishes the database is almost useless. If it takes 50s to sync all the files then the database can process requests only for 10s out of each 60s window...

If you have logging enabled in MongoDB you should see log entries like:

    Tue Apr 29 06:22:02.901 [DataFileSync] flushing mmaps took 33302ms for 376 files

On Linux this is much faster as it has a special optimization for such a case, which Solaris doesn't.
However, Oracle fixed the bug some time ago and now the same database reports:

    Tue Apr 29 12:55:51.009 [DataFileSync] flushing mmaps took 9ms for 378 files

This is an over 3000x improvement!

The Solaris Bug ID is: 18658199 Speed up msync() on ZFS by 90000x with this one weird trick
which is fixed in Solaris 11.1 SRU21 and also Solaris 11.2

Note that the fix only improves a case when an entire file is msync'ed and the underlying file system is ZFS. Any application which has a similar behavior would benefit. For some large mappings, like 1TB, the improvement can even be 178,000x.
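
If you want to check whether your own instance is affected, the flush times are easy to pull out of the MongoDB log; the log path below is just a common default and will differ per installation.

# Show the most recent background flushes of the mmap'ed datafiles.
# Times in the tens of seconds mean the DataFileSync thread (and its
# global lock) is eating a large part of each 60s window.
grep 'flushing mmaps took' /var/log/mongodb/mongod.log | tail -20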


by milek (noreply@blogger.com) at July 22, 2014 03:38 PM

Standalone Sysadmin

Goverlan press release on the Boston SysAdmin Day party!

Thanks again to the folks over at Goverlan for supporting my SysAdmin Appreciation Day party here in Boston! They're so excited, they issued a Press Release on it!

Goverlan Press Release


I'm really looking forward to seeing everyone on Friday. We've got around 60 people registered, and I imagine we'll have more soon. Come have free food and drinks. Register now!

by Matt Simmons at July 22, 2014 02:12 PM

Google Blog

Little Box Challenge opens for submissions

These days, if you’re an engineer, inventor or just a tinkerer with a garage, you don’t have to look far for a juicy opportunity: there are cash prize challenges dedicated to landing on the moon, building a self-driving car, cleaning the oceans, or inventing an extra-clever robot. Today, together with the IEEE, we’re adding one more: shrinking a big box into a little box.

Seriously.

Of course, there’s more to it than that. Especially when the big box is a power inverter, a picnic cooler-sized device used to convert the energy that comes from solar, electric vehicles & wind (DC power) into something you can use in your home (AC power). We want to shrink it down to the size of a small laptop, roughly 1/10th of its current size. Put a little more technically, we’re looking for someone to build a kW-scale inverter with a power density greater than 50W per cubic inch. Do it best and we’ll give you a million bucks.

There will be obstacles to overcome (like the conventional wisdom of engineering). But whoever gets it done will help change the future of electricity. A smaller inverter could help create low-cost microgrids in remote parts of the world. Or allow you to keep the lights on during a blackout via your electric car’s battery. Or enable advances we haven’t even thought of yet.

Either way, we think it’s time to shine a light on the humble inverter, and the potential that lies in making it much, much smaller. Enter at littleboxchallenge.com—we want to know how small you can go.

by Emily Wood (noreply@blogger.com) at July 22, 2014 08:00 AM

Chris Siebenmann

What I know about the different types of SSH keys (and some opinions)

Modern versions of SSH support up to four different types of SSH keys (both for host keys to identify servers and for personal keys): RSA, DSA, ECDSA, and as of OpenSSH 6.5 we have ED25519 keys as well. Both ECDSA and ED25519 use elliptic curve cryptography, DSA uses finite fields, and RSA is based on integer factorization. EC cryptography is said to have a number of advantages, particularly in that it uses smaller key sizes (and thus needs smaller exchanges on the wire to pass public keys back and forth).

(One discussion of this is this cloudflare blog post.)

RSA and DSA keys are supported by all SSH implementations (well, all SSH v2 implementations which is in practice 'all implementations' these days). ECDSA keys are supported primarily by reasonably recent versions of OpenSSH (from OpenSSH 5.7 onwards); they may not be in other versions, such as the SSH that you find on Solaris and OmniOS or on a Red Hat Enterprise 5 machine. ED25519 is only supported in OpenSSH 6.5 and later, which right now is very recent; of our main machines, only the Ubuntu 14.04 ones have it (especially note that it's not supported by the RHEL 7/CentOS 7 version of OpenSSH).

(I think ED25519 is also supported on Debian test (but not stable) and on up to date current FreeBSD and OpenBSD versions.)

SSH servers can offer multiple host keys in different key types (this is controlled by what HostKey files you have configured). The order that OpenSSH clients will try host keys in is controlled by two things: the setting of HostKeyAlgorithms (see 'man ssh_config' for the default) and what host keys are already known for the target host. If no host keys are known, I believe that the current order is ECDSA, ED25519, RSA, and then DSA; once there are known keys, they're tried first. What this really means is that for an otherwise unknown host you will be prompted to save the first of these key types that the host has and thereafter the host will be verified against it. If you already know an 'inferior' key (eg an RSA key when the host also advertises an ECDSA key), you will verify the host against the key you know and, as far as I can tell, not even save its 'better' key in .ssh/known_hosts.

(If you have a mixture of SSH client versions, people can wind up with a real mixture of your server key types in their known_hosts files or equivalent. This may mean that you need to preserve and restore multiple types of SSH host keys over server reinstalls, and probably add saving and restoring ED25519 keys when you start adding Ubuntu 14.04 servers to your mix.)
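
For concreteness, the server and client sides of this look roughly like the following; the paths are the usual OpenSSH defaults, the ordering is only an example, and an older sshd simply won't have (or accept) the ED25519 line.

# sshd_config: offer one host key per key type you want to advertise.
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key

# ssh_config (client): prefer ED25519 and RSA host keys over ECDSA and DSA.
Host *
    HostKeyAlgorithms ssh-ed25519,ssh-rsa,ecdsa-sha2-nistp256,ssh-dss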

In terms of which key type is 'better', some people distrust ECDSA because the elliptic curve parameters are magic numbers from NIST and so could have secret backdoors, as appears to be all but certain for another NIST elliptic curve based cryptography standard (see also and also and more). I reflexively dislike both DSA and ECDSA because DSA implementation mistakes can be explosively fatal, as in 'trivially disclose your private keys'. While ED25519 also uses DSA it takes specific steps to avoid at least some of the explosive failures of plain DSA and ECDSA, failures that have led to eg the compromise of Sony's Playstation 3 signing keys.

(RFC 6979 discusses how to avoid this particular problem for DSA and ECDSA but it's not clear to me if OpenSSH implements it. I would assume not until explicitly stated otherwise.)

As a result of all of this I believe that the conservative choice is to advertise and use only RSA keys (both host keys and personal keys) with good bit sizes. The slightly daring choice is to use ED25519 when you have it available. I would not use either ECDSA or DSA although I wouldn't go out of my way to disable server ECDSA or DSA host keys except in a very high security environment.
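
In practice that works out to roughly the following when generating personal keys (file names and comments are up to you):

# The conservative choice: RSA with a good key size.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -C 'your comment'

# The slightly daring choice, on OpenSSH 6.5 or later.
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -C 'your comment'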

(I call ED25519 'slightly daring' only because I don't believe it's undergone as much outside scrutiny as RSA, and I could be wrong about that. See here and here for a discussion of ECC and ED25519 security properties and general security issues. ED25519 is part of Dan Bernstein's work and in general he has a pretty good reputation on these issues. Certainly the OpenSSH people were willing to adopt it.)

PS: If you want to have your eyebrows raised about the choice of elliptic curve parameters, see here.

PPS: I don't know what types of keys non-Unix SSH clients support over and above basic RSA and DSA support. Some casual Internet searches suggest that PuTTY doesn't support ECDSA yet, for example. And even some Unix software may have difficulties; for example, currently GNOME Keyring apparently doesn't support ECDSA keys (via archlinux).

by cks at July 22, 2014 03:13 AM

July 21, 2014

Chris Siebenmann

The CBL has a real false positive problem

As I write this, a number of IP addresses in 128.100.1.0/24 are listed in the CBL, and various of them have been listed for some time. There is a problem with this: these CBL-listed IP addresses don't exist. I don't mean 'they aren't supposed to exist'; I mean 'they could only theoretically exist on a secure subnet in our machine room and even if they did exist our firewall wouldn't allow them to pass traffic'. So these IP addresses don't exist in a very strong sense. Yet the CBL lists them and has for some time.

The first false positive problem the CBL has is that they are listing this traffic at all. We have corresponded with the CBL about this and these listings (along with listings on other of our subnets) all come from traffic observed at a single one of their monitoring points. Unlike what I assumed in the past, these observations are not coming from parsing Received: headers but from real TCP traffic. However they are not connections from our network, and the university is the legitimate owner and router of 128.100/16. A CBL observation point that is using false routing (and is clearly using false routing over a significant period of time) is an actively dangerous thing; as we can see here, false routing can cause the CBL to list anything.

The second false positive problem the CBL has is that, as mentioned, we have corresponded with the CBL over this. In that correspondence the CBL spokesperson agreed that the CBL was incorrect in this listing and would get it fixed. That was a couple of months ago, yet a revolving cast of 128.100.1.0/24 IP addresses still gets listed and relisted in the CBL. As a corollary of this, we can be confident that the CBL listening point(s) involved are still using false routes for some of their traffic. You can apply charitable or less charitable assumptions for this lack of actual action on the CBL's part; at a minimum it is clear that some acknowledged false positive problems go unfixed for whatever reason.

I don't particularly have a better option than the CBL these days. But I no longer trust it anywhere near as much as I used to and I don't particularly like its conduct here.

(And I feel like saying something about it so that other people can know and make their own decisions. And yes, the situation irritates me.)

(As mentioned, we've seen similar issues in the past, cf my original 2012 entry on the issue. This time around we've seen it on significantly more IP addresses, we have extremely strong confidence that it is a false positive problem, and most of all we've corresponded with the CBL people about it.)

by cks at July 21, 2014 03:04 AM

Rands in Repose

Hacking on Mtrek

Mtrek is a real-time multiplayer space combat game loosely set in the Star Trek Universe. Sounds pretty sweet, right? Check out a screen shot.

mtrek

OoooOooh yeaaaaaaaah.

Designed and written by Tim Wisseman and Chuck L. Peterson in the late 80s at University of California, Santa Cruz, Mtrek is completely text-based. To understand where an enemy ship was, you had to visualize the direction via the onscreen data. If this wasn’t enough mental load, it was absolutely required to develop a set of macros on top of the game’s byzantine keyboard commands in order to master a particular ship. Furthermore, if you weren’t intimately familiar with the performance characteristics of your particular ship, you’d get quickly clobbered.

Mtrek was brutally unforgiving to new players. It was a pain in the ass to master, and constantly playing Mtrek was the primary reason I almost didn’t make it through my freshman year at UCSC, but that is not why it’s important to me.

The Mtrek Appeal

Mtrek’s painful learning curve was part of its appeal for me. It was a badge of honor to be able to sit at a Unix terminal, stare at a bunch of numbers, and crush your enemies. Mtrek was also a tantalizing preview of the connected nature of the Internet. Remember, this was the late 80s and we were years away from the arrival of the consumable Internet afforded us by web browsers. Most important to me, Mtrek arrived when I was early in my computer science career, so it wasn’t just a way to avoid studying; it was a machine I desperately wanted to understand.

After months of playing, I learned that one of the game’s creators, Chuck L. Peterson (“clp”), was a frequent player. After one particularly successful evening with my Romulan Bird of Prey, I mailed clp and asked if there was anything, however small, I could do to help with the game. Without so much as a single question to vet my qualifications, he gave me a project.

In Mtrek, there are a handful of bots that perform various utility and housekeeping tasks within the game. One of these bots, the THX-1138, ran around the quadrants, and he wanted to alter its behavior. He sent me the source, gave me the barest of explanations of the code, and sent me on my way.

Let’s talk briefly about my programming qualifications at the time. My experience had recently peaked at an elegant snippet of Pascal that did an ok job of counting words from a file. The THX bot was written in C and it ran against a real-time game engine which was a programming construct far beyond my current understanding. I was terrified, so I hacked.

Hack Spectrum

MIT defines a hack thusly:

The word hack at MIT usually refers to a clever, benign, and “ethical” prank or practical joke, which is both challenging for the perpetrators and amusing to the MIT community (and sometimes even the rest of the world!) Note that this has nothing to do with computer (or phone) hacking, which we call “cracking”.

This was not the hacking I was doing late on a Thursday night while I avoided studying for my History of Consciousness final. My version of hacking at the time was, “Oh shit, how am I going to write this code in a language I don’t know against a codebase I don’t understand quickly enough that this guy who I respect doesn’t think I’m an idiot?”

The experience involved a set of tasks that I’ve become intimately familiar with over the years. To solve this particular problem, I had to figure out how to comprehend someone’s mostly undocumented code via a series of investigative experiments that started small (“Does it compile?”) and grew larger (“What happens when I change this function?”) as I gained confidence.

It was a technical mystery and my job was to unpack and understand the mystery in whatever way I could in a pre-Google world. I searched newsgroups, I read man pages, and when I was stuck and felt I had a credibly hard question, I sent clp a well-researched email to which he’d quickly and briefly respond with delicious unblocking clarity.

Two weeks later, I still didn’t know C, and had only the barest understanding of how the Mtrek game worked, but the THX bot was acting how clp had expected. I shipped. It was a hack, it was an inelegant but effective solution to a computing problem and I had performed it.

An Unknown Someone

I’m not sure why I started thinking about Mtrek earlier this week. I think I was reflecting on the spartan interface. I searched for the game’s name and discovered the original game was no longer online, but a Java equivalent had been written and was alive and kicking. Sadly, I also discovered that clp had passed away in 2012.

I never met the guy, but as I wandered the Mtrek website reminiscing, I discovered he took the time to acknowledge my small, inept addition to the Mtrek universe. But far more importantly, he took a chance on an unknown someone he didn’t know nor would ever meet.

He doesn’t know the impact of a small decision he made many years ago, or that the result of that decision allowed me to produce the first piece of code that I felt was mine, not because I wrote it, but because I learned how to write it. The experience of writing this snippet of forgotten code was my first glimpse into the essential lessons of learning. The end result of clp’s small decision gave me timely and essential confidence to become a software engineer.

Thank you, clp.

by rands at July 21, 2014 02:48 AM

July 20, 2014

Ubuntu Geek

Step By Step Ubuntu 14.04 LTS (Trusty Tahr) LAMP Server Setup


In around 15 minutes, the time it takes to install Ubuntu Server Edition, you can have a LAMP (Linux, Apache, MySQL and PHP) server up and ready to go. This feature, exclusive to Ubuntu Server Edition, is available at the time of installation. The LAMP option means you don’t have to install and integrate each of the four separate LAMP components, a process which can take hours and requires someone who is skilled in the installation and configuration of the individual applications. Instead, you get increased security, reduced time-to-install, and reduced risk of misconfiguration, all of which results in a lower cost of ownership. Currently this installation provides PostgreSQL database, Mail Server, Open SSH Server, Samba File Server, Print Server, Tomcat Java Server, Virtual Machine Host, Manual Package selection, LAMP and DNS options for pre-configured installations, easing the deployment of common server configurations.
(...)
Read the rest of Step By Step Ubuntu 14.04 LTS (Trusty Tahr) LAMP Server Setup (652 words)


© ruchi for Ubuntu Geek, 2014.

by ruchi at July 20, 2014 11:45 PM

Evaggelos Balaskas

apache Redirect permanent your web app to https

This is pretty simple to even document, but I need a reference point!


<VirtualHost 1.2.3.4:80>

        ServerName example.com
        Redirect permanent / https://example.com

</VirtualHost>

Don't forget to create the https virtual host, something like this:

<VirtualHost 1.2.3.4:443>

        ServerName example.com

        ServerAdmin admin@example.com

        # Logs
        CustomLog logs/example.com.access.log combined
        ErrorLog  logs/example.com.error.log

        DocumentRoot /www/examplecom
        DirectoryIndex index.html

        <Directory "/www/examplecom">
                Order allow,deny
                Allow from all 

                AllowOverride All 

                AuthType basic
                AuthName "Enter At Your Own Risk"
                AuthUserFile /www/htpasswd_for_examplecom
                Require valid-user

        </Directory>

        # HSTS 
        Header always set Strict-Transport-Security "max-age=31536000; "

        # SSL Support
        SSLEngine on

        SSLProtocol all -SSLv2 -SSLv3
        SSLHonorCipherOrder on
        SSLCipherSuite HIGH:!aNULL:!MD5

        SSLCertificateFile      /certs/examplecom.crt
        SSLCertificateKeyFile   /certs/examplecom.key
        SSLCertificateChainFile /certs/class3.crt

</VirtualHost>
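
Once both virtual hosts are live, a quick check from a shell confirms the permanent redirect and the HSTS header (using the example.com name from the configs above; add -k to curl if the certificate is not trusted locally):

# The plain-HTTP vhost should answer with a 301 pointing at https://example.com
curl -I http://example.com/

# The HTTPS vhost should always send the Strict-Transport-Security header,
# even on the 401 that basic auth returns before you log in.
curl -sI https://example.com/ | grep -i strict-transport-security
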
Tag(s): apache, ssl, https

July 20, 2014 10:30 AM

Raymii.org

Building HA Clusters with Ansible and Openstack

This is an extensive guide on building highly available clusters with Ansible and Openstack. We'll build a highly available cluster consisting of two load balancers, two database servers and two application servers running a simple wordpress site. This is all done with Ansible; the cluster nodes are all on Openstack. Ansible is a super awesome orchestration tool and Openstack is a big, buzzword-filled software suite for datacenter virtualization.

July 20, 2014 10:30 AM

Chris Siebenmann

HTTPS should remain genuinely optional on the web

I recently ran across Mozilla Bug 1041087 (via HN), which has the sort of harmless-sounding title of 'Switch generic icon to negative feedback for non-https sites'. Let me translate this to English: 'try to scare users if they're connecting to a non-https site'. For anyone who finds this attractive, let me say it flat out: this is a stupid idea on today's web.

(For the record, I don't think it's very likely that Mozilla will take this wishlist request seriously. I just think that there are people out there who wish that they would.)

I used to be very down on SSL Certificate Authorities, basically considering the whole thing a racket. It remains a racket but in today's environment of pervasive eavesdropping it is now a useful one; one might as well make the work of those eavesdroppers somewhat harder. I would be very enthusiastic for pervasive encryption if we could deploy that across the web.

Unfortunately we can't, exactly because of the SSL CA racket. Today having an SSL certificate means either scaring users and doing things that are terrible for security overall or being beholden to an SSL CA (and often although not always forking over money for this dubious privilege). Never mind the lack of true security due to the core SSL problem; this is not an attractive solution in general. Forcing mandatory HTTPS today means giving far too much power and influence to SSL CAs, often including the ability to turn off your website at their whim or mistake.

You might say that this proposal doesn't force mandatory HTTPS. That's disingenuous. Scaring users of a major browser when they visit a non-HTTPS site is effectively forcing HTTPS for the same reason that scary warnings about self-signed certificates force the use of official CA certificates. Very few websites can afford to scare users.

The time to force people towards HTTPS is when we've solved all of these problems. In other words, when absolutely any website can make itself a certificate and then securely advertise and use that certificate. We are nowhere near this ideal world in today's SSL CA environment (and we may or may not ever get there).

(By the way, I really do mean any website here, including a hypothetical one run by anonymous people and hosted in some place either that no one likes or that generates a lot of fraud or both. There are a lot of proposals that basically work primarily for people in the West who are willing to be officially identified and can provide money; depart from this and you can find availability going downhill rapidly. Read up on the problems of genuine Nigerian entrepreneurs someday.)

by cks at July 20, 2014 04:12 AM

Everything Sysadmin

Tom @ Austin Cloud Meetup

Do you live near Austin? I'll be speaking at the Austin Cloud Meetup in September. They're moving their meeting to coincide with my trip there to speak at the SpiceWorld conference. More info here: http://www.meetup.com/CloudAustin/events/195140322/

July 20, 2014 01:28 AM

July 19, 2014

Yellow Bricks

Good Read: Virtual SAN data locality white paper


I was reading the Virtual SAN Data Locality white paper. I think it is a well written paper, and really enjoyed it. I figured I would share the link with all of you and provide a short summary. (http://blogs.vmware.com/vsphere/files/2014/07/Understanding-Data-Locality-in-VMware-Virtual-SAN-Ver1.0.pdf)

The paper starts with an explanation of what data locality is (also referred to as “locality of reference”), and explains the different types of latency experienced in Server SAN solutions (network, SSD). It then explains how Virtual SAN caching works, how locality of reference is implemented within VSAN, and also why VSAN does not move data around, because of the high cost compared to the benefit for VSAN. It also demonstrates how VSAN delivers consistent performance, even without a local read cache. The key word here is consistent performance, something that is not the case for all Server SAN solutions. In some cases, significant performance degradation is experienced for minutes after a workload has been migrated. As hopefully all of you know, vSphere DRS runs every 5 minutes by default, which means that migrations can and will happen various times a day in most environments. (I have seen environments where 30 migrations a day was not uncommon.) The paper then explains where and when data locality can be beneficial, primarily when RAM is used and with specific use cases (like View), and then explains how CBRC aka View Accelerator (an in-RAM deduplicated read cache) could be used for this purpose. (It does not explain in depth how other Server SAN solutions leverage RAM for local read caching, but I am sure those vendors will have more detailed posts on that, which are worth reading!)

There are a couple of real gems in this paper, which I will probably read a couple more times in the upcoming days!

"Good Read: Virtual SAN data locality white paper" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.


Pre-order my upcoming book Essential Virtual SAN via Pearson today!

by Duncan Epping at July 19, 2014 03:18 PM

Chris Siebenmann

Some consequences of widespread use of OCSP for HTTPS

OCSP is an attempt to solve some of the problems of certificate revocation. The simple version of how it works is that when your browser contacts a HTTPS website, it asks the issuer of the site's certificate if the certificate is both known and still valid. One important advantage of OCSP over CRLs is that the CA now has an avenue to 'revoke' certificates that it doesn't know about. If the CA doesn't have a certificate in its database, it can assert 'unknown certificate' in reply to your query and the certificate doesn't work.

The straightforward implication of OCSP is that the CA knows that you're trying to talk to a particular website at a particular time. Often third parties can also know this, because OCSP queries may well be done over HTTP instead of HTTPS. OCSP stapling attempts to work around the privacy implications by having the website include a pre-signed, limited duration current attestation about their certificate from the CA, but it may not be widely supported.

(Website operators have to have software that supports OCSP stapling and specifically configure it. OCSP checking in general simply needs a field set in the certificate, which the CA generally forces on your SSL certificates if it supports OCSP.)
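
If you want to poke at this yourself, openssl can show both whether a site staples an OCSP response and what the CA's responder says about a certificate you hold; the host names and file names below are placeholders.

# Does the server staple an OCSP response? The output either contains an
# 'OCSP Response Data' block or says 'OCSP response: no response sent'.
openssl s_client -connect example.com:443 -status < /dev/null 2>/dev/null | \
    grep -i -A 5 'OCSP response'

# Ask the CA's responder directly about a certificate and its issuer;
# the responder URL comes from the certificate's AIA extension
# (openssl x509 -in site.pem -noout -ocsp_uri prints it).
openssl ocsp -issuer issuer.pem -cert site.pem \
    -url http://ocsp.example-ca.com -text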

The less obvious implication of OCSP is that your CA can now turn off your HTTPS website any time it either wants to, is legally required to, or simply screws up something in its OCSP database. If your browser checks OCSP status and the OCSP server says 'I do not know this certificate', your browser is going to hard-fail the HTTPS connection. In fact it really has to, because this is exactly the response that it would get if the CA had been subverted into issuing an imposter certificate in some way that was off the books.

You may be saying 'a CA would never do this'. I regret to inform my readers that I've already seen this happen. The blunt fact is that keeping high volume services running is not trivial and systems suffer database glitches all the time. It's just that with OCSP someone else's service glitch can take down your website, my website, or in fact a whole lot of websites all at once.

As they say, this is not really a good way to run a railroad.

(See also Adam Langley on why revocation checking doesn't really help with security. This means that OCSP is both dangerous and significantly useless. Oh, and of course it often adds extra latency to your HTTPS connections since it needs to do extra requests to check the OCSP status.)

PS: note that OCSP stapling doesn't protect you from your CA here. It can protect you from temporary short-term glitches that fix themselves automatically (because you can just hold on to a valid OCSP response while the glitch fixes itself), but that's it. If the CA refuses to validate your certificate for long enough (either deliberately or through a long-term problem), your cached OCSP response expires and you're up the creek.

by cks at July 19, 2014 04:23 AM


Administered by Joe. Content copyright by their respective authors.