Archive for the ‘General’ Category

The Etymology of Unicorn Tears

May 23, 2012 2 comments

Amazing and wondrous things happen when two like-minded, devilishly fantastic and curious individuals have brunch. Sometimes silly things happen too, and this is one of those times.  After a few jokes the term “Unicorn Tears” was uttered (actually in reference to some medicine), and it made us wonder: where did this currently fashionable term originate?

The phrase “Unicorn Tears” has been around for quite some time, but for the life of me I can’t find the etymology.  My Google-fu is very strong; unfortunately, tracing the etymology of this pop-culture phrase is difficult.  Google’s search index is absolutely contaminated with references to “Unicorn Tears” thanks to the popularity of the phrase and its use in many recent Internet memes.

My usual source for such information is Urban Dictionary, but this time around their definition comes up short: the various meanings are given, but no etymology.  I turned to Twitter, but the one person who replied simply pointed me back to Urban Dictionary, as if I hadn’t checked there first.

The two meanings, abridged, from Urban Dictionary:

1) Snake Oil (The most common usage)

2) Male Ejaculate

I’m primarily focused on the first meaning, as that’s the usage that is so popular.  So my question, dear reader: where did this phrase start?  A movie?  A book?  TV?  Was there a “ground zero” Internet meme that started it all?  Surely someone knows, and if you do, please comment below or tweet me @paulmon.

My guest for brunch this past weekend hasn’t offered me a reward, but I’m thinking impressing her just might get me another brunch where we can discuss similarly mind-blowing topics.


Random Paul Quote for June 3rd, 2011

Knowing you suck at something is powerful so long as you take it as a chance to improve.

Categories: General

Random Paul Quote for May 31, 2011

Approach life like nothing bad can happen and lots of good will likely be the result.

Categories: General

Why I CrossFit

November 4, 2010 1 comment

I haven’t blogged in a while, but a recent thread in a CrossFit group on Facebook had me thinking: why do I CrossFit?  Here is my reasoning:

I typically find myself at a loss for words when asked to explain it.

I can talk about losing ~50-60 lbs, gaining muscle and stamina, losing 5″ from my waist, etc., but that doesn’t explain how different CrossFit workouts are, or how each and every one of us encourages the others to push hard through every workout. Working out at a globo gym is done alone; there is no team, it’s just you and the weight, solitary. In CrossFit my peeps at the gym are my teammates, not competitors; they want me to do the best I can possibly do on a WOD.

As in many sports, you don’t get beaten by the competition; you only have one person to blame for losing, yourself. You don’t lose a round of golf because Tiger played well that day; you have no control over that. You lost because you didn’t play well enough. In CrossFit it’s the same thing: if I come second in a WOD or last in a CrossFit competition, I wasn’t beaten, I beat myself.

Doing the best I can is why I keep pushing and why my teammates keep pushing me, and THAT IS WHY I DO CROSSFIT.

Granted the results do speak for themselves too:

Paul Before CrossFit

Paul Results

VMware Needs a New File System

I can’t be the first person to say this, but just in case I am, here’s what I’m thinking.

I don’t think VMware can continue on their current path, focused very much on the cloud, without a new approach to storage.  Why?  Simple: cost and scale.  By scale I don’t mean 10, 100 or even 500 nodes, but thousands of nodes.  Current solutions involving block storage with VMFS, or NAS with NFS, rely on costly external systems from the likes of NetApp, HP, EMC and others, leading to complexity, additional cost and limited scalability.  I don’t say this lightly, but as someone who has actually designed a cloud computing offering on VMware, I’ve seen the limitations first hand.

What I’m proposing is a replacement file system for VMFS, one that’s not tied to the traditional SAN/NAS approach to storage.  What would this look like?  Many of the very systems that run VMware have varying amounts of local storage, from 1 drive to massive 32+ drive internal SAS arrays.  The problem with using internal storage is that none of the other hosts can see the VMDKs, so vMotion and the like won’t work.  What I’m proposing is pooling all of those drives and spindles into a large storage cluster that every VMware ESX host can access.

To be clear, I’m not talking about simply running a VM on each VMware host as a means to this end, but rather having VMware natively support this storage directly within ESX.  In much the same way VMFS is layered on top of a LUN, the local storage in standard servers could be clustered together over a 10Gb network to form a large pool of storage.  This addresses cost and scale: cost, because simply adding 4+ 1TB drives to a typical 1U server isn’t very expensive; scale, because with this approach every time you add an ESX host you’re adding storage, CPU and networking, not just CPU and networking.
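As a back-of-the-envelope illustration of that scale argument (the host count, drive sizes and replication factor below are my own assumptions for the sketch, not anything VMware has published), pooled local disks add up quickly:

```python
# Hypothetical sizing for a pooled-local-storage cluster.
# All numbers are illustrative assumptions, not product specifications.

def usable_capacity_tb(hosts, drives_per_host, drive_tb, replicas):
    """Raw capacity across all hosts, divided by the replication factor
    since each block is stored `replicas` times for redundancy."""
    raw_tb = hosts * drives_per_host * drive_tb
    return raw_tb / replicas

# 100 commodity 1U servers, each with 4 x 1TB drives, keeping 2 copies
# of every block: 100 * 4 * 1 / 2 = 200 TB usable.
print(usable_capacity_tb(hosts=100, drives_per_host=4, drive_tb=1.0, replicas=2))
```

Every host added after that grows the pool by another 2 TB usable, which is the whole point: capacity scales with the compute.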

What would this file system look like?  Perhaps like IBM’s GPFS, or Apache Hadoop’s HDFS (not a fan of the single NameNode, but that’s a different blog post), or something completely new.  I believe something completely new would provide more flexibility than forcing one of these off-the-shelf solutions to fit, but the general approach would be the same.  I’m not talking pie in the sky; this is how IBM runs its large supercomputers, and if it can be done for those, it can be done for this.

Each VMware host becomes part of the greater storage cluster, not at the VM level but natively within ESX itself.  Think of the VMDKs as objects and replicate the writes across the data center; vMotion would work in much the same way as it does today with VMFS on a traditional LUN.  Or, even better, VMware could provide the means for storage companies such as IBM, HP, EMC and others to supply their own “file system plugin”.  Storage becomes software, in the same way that servers, firewalls and network switches are now software thanks to virtualization.  Virtualize your storage on your virtualization platform, not externally.

Take this one step further and you could have different ESX hosts with different storage types: some nodes with SAS, some with SATA, and others still with SSD.  It would even be possible to have non-storage nodes that don’t contribute to the overall storage in the cluster but provide additional CPU to run VMs, or dedicated storage nodes that don’t run VMs, though that’s not as ideal to me as having every node contribute some storage.

Another step up in the stack, VMs could be assigned to “storage types”, so that a database VM could be placed on SAS-class storage, or possibly a mixed SSD/SAS or SAS/SATA tier, with an ILM approach native to VMware.  vMotion, DRS and the rest would all become aware of a VM’s storage needs; VMware would know which storage each VM is provisioned on and be innately aware of the performance that storage is delivering to the VMs themselves.  Allow multiple copies of a block to be stored on multiple nodes depending on redundancy requirements.  Have an important database?  Then keep 3+ copies distributed across the cluster.  Have a simple web server?  Perhaps keep 2 copies, or even just 1.
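To make the idea concrete, here is a minimal sketch of what such class-aware replica placement could look like. Everything here is hypothetical: the host names, the storage classes, the `place_replicas` helper and its first-fit rule are my own illustration, not any VMware API.

```python
# Hypothetical sketch: placing VMDK replica copies on hosts by storage class.
# Host names, classes, and the placement rule are illustrative assumptions.

from collections import namedtuple

Host = namedtuple("Host", ["name", "storage_class"])

CLUSTER = [
    Host("esx01", "SSD"), Host("esx02", "SAS"), Host("esx03", "SAS"),
    Host("esx04", "SATA"), Host("esx05", "SAS"), Host("esx06", "SSD"),
]

def place_replicas(storage_class, copies):
    """Pick `copies` distinct hosts whose local storage matches the class."""
    candidates = [h.name for h in CLUSTER if h.storage_class == storage_class]
    if len(candidates) < copies:
        raise ValueError("not enough %s hosts for %d replicas"
                         % (storage_class, copies))
    return candidates[:copies]

# An important database VM: 3 copies spread across SAS-class hosts.
print(place_replicas("SAS", 3))   # ['esx02', 'esx03', 'esx05']
# A less critical VM on SSD: 2 copies are plenty.
print(place_replicas("SSD", 2))   # ['esx01', 'esx06']
```

A real placement layer would also balance free space and avoid co-locating replicas in the same failure domain, but the policy surface (class plus copy count per VM) is the part I’m arguing VMware should expose.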

Extend this to the data center level: replicate data to other data centers and have your “active” file system in your production data center and your “backup” file system in another data center hundreds of kilometers away.  You’re not replicating the entire file system on a schedule; you’re replicating the block writes to the VMDKs in the clustered file system.  Taken to the extreme, this would let you run an application in any data center at any time with little more than a vMotion to the other data center.  Now it isn’t about having a “production data center” and a “DR data center”, but rather running the apps in the data center best suited to the given workload, or possibly the data center that currently costs less per kWh.

EMC’s recent announcement of VPLEX achieves some of what I’m after, but it’s yet another box (or boxes) that isn’t directly part of the VM infrastructure.  From what I’ve read it also seems to be an FC solution, so again it doesn’t address the complexity, cost and scalability issues inherent in an FC deployment.  Scaling FC to thousands of nodes isn’t practical for many reasons; a single clustered network storage option would address that and more.

Perhaps I’m dreaming, but I think this is completely doable; VMware just has to realize it’s needed.

First Sous Vide Failure

February 9, 2010 Leave a comment

I cooked the flat iron steak from last night until this morning, 12 hours at 134°F, and chilled it quickly in an ice bath this morning so I could reheat it, sear it, and serve it for dinner tonight.  Lesson learned: since flat iron is quite a well-marbled piece of meat, 12 hours was FAR too long.  It was one notch from being mush and simply couldn’t be eaten.  We live to learn; next time, 2 hours tops for the flat iron.

The 24-hour flank turned out to be a winner though.  I still need to tweak the seasonings a little, but it was tender, almost filet-like.

Tomorrow night I’m going to do a chicken breast and this weekend will be salmon. More on the way…

Categories: General

DELL Latitude E6500/6400 Performance Issues

November 30, 2009 2 comments

Slashdot just posted a story about performance issues with DELL’s Latitude E6500/6400 series notebooks.  This is noteworthy simply because I have (or had) the EXACT same problem.  DELL replaced my motherboard twice and the cooling system once, and the problem finally went away; for how long, who knows.  Screenshot below of my system when the problem was in full swing.  You can see Intel SpeedStep kicking in to throttle the effective MHz of the CPU.


DELL E6500 Performance Issue

DELL E6500 CPU Throttling



Categories: General