Why I CrossFit

November 4, 2010

I haven’t blogged in a while, but a recent thread in a CrossFit group on Facebook had me thinking: why do I CrossFit? Here is my reasoning:

I typically find myself at a loss for words when asked to explain it.

I can talk about losing ~50-60lbs, gaining muscle and stamina, losing 5″ from my waist, and so on, but that doesn’t explain how different CrossFit workouts are, or how each and every one of us encourages the others to push hard through each workout. Working out at a globo gym is done alone; there is no team, it’s you and the weight, solitary. In CrossFit my peeps at the gym are my teammates, not competitors, and they want me to do the best I can possibly do on a WOD.

As in many sports, you don’t get beaten by the competition; you only have one person to blame for losing, yourself. You don’t lose a round of golf because Tiger played well that day, since you have no control over that; you lost because you didn’t play well enough. In CrossFit it’s the same thing: if I come second in a WOD or last in a CrossFit comp, I wasn’t beaten, I beat myself.

Doing the best I can is why I keep pushing and why my teammates keep pushing me, and THAT IS WHY I DO CROSSFIT.

Granted, the results do speak for themselves too:

Paul Before CrossFit

Paul Results


VMWare Needs a New File System

I can’t be the first person to say this, but just in case I am, here’s what I’m thinking.

I don’t think VMWare can continue on their current path, focused very much on the cloud, without a new approach to storage.  Why?  Simple: cost and scale.  By scale I don’t mean 10, 100 or even 500 nodes, but thousands of nodes.  Current solutions involving block storage with VMFS, or NAS with NFS, rely on costly external systems from the likes of NetApp, HP, EMC and others; this leads to complexity, additional cost and limited scalability.  I don’t say this lightly, but as someone who has actually designed a cloud computing offering on VMWare, I’ve seen the limitations first hand.

What I’m proposing is a replacement for VMFS, a file system that isn’t tied to the traditional SAN/NAS approach to storage.  What would this look like?  Many of the very systems that run VMWare have varying amounts of local storage, from a single drive to massive 32+ drive internal SAS arrays.  The problem with using internal storage is that none of the other hosts can see the VMDKs for VMotion and so on.  What I’m proposing is leveraging all of those drives and spindles as one large cluster of storage that every VMWare ESX host can access.

To be clear, I’m not talking about simply having a VM run on each VMWare host as a means to this end, but rather having VMWare natively make use of all this storage itself: in much the same way VMFS is layered on top of a LUN, the local storage in standard servers could be clustered together to form a large pool of storage across a 10Gb network.  This addresses cost and scale.  Cost, because simply adding 4+ 1TB drives to a typical 1U server isn’t very expensive.  Scale, because with this approach every time you add an ESX host you’re adding storage, CPU and networking, not just CPU and networking.
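To make the scale argument a little more concrete, here’s a minimal sketch (Python, purely illustrative; the Host/StoragePool names and the numbers are mine, not anything VMWare ships) of a pool built from host-local drives, where every host added contributes both capacity and CPU:

```python
# Illustrative model only: shows how aggregate capacity grows as hosts join.
# Host and StoragePool are hypothetical names, not a VMWare API.

class Host:
    def __init__(self, name, cpu_cores, local_drives_tb):
        self.name = name
        self.cpu_cores = cpu_cores
        self.local_drives_tb = local_drives_tb  # e.g. [1, 1, 1, 1] for 4x 1TB

class StoragePool:
    def __init__(self, replicas=2):
        self.replicas = replicas   # copies kept of each VMDK block/object
        self.hosts = []

    def add_host(self, host):
        self.hosts.append(host)

    @property
    def raw_tb(self):
        return sum(sum(h.local_drives_tb) for h in self.hosts)

    @property
    def usable_tb(self):
        # usable space shrinks with the replication factor
        return self.raw_tb / self.replicas

    @property
    def total_cores(self):
        return sum(h.cpu_cores for h in self.hosts)

pool = StoragePool(replicas=2)
for i in range(40):  # a 40-node ESX cluster, 4x 1TB drives per 1U host
    pool.add_host(Host(f"esx{i:02d}", cpu_cores=8, local_drives_tb=[1, 1, 1, 1]))

print(pool.raw_tb, "TB raw,", pool.usable_tb, "TB usable,", pool.total_cores, "cores")
# 160 TB raw, 80.0 TB usable, 320 cores -- every new host adds storage *and* CPU
```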

What would this file system look like?  Perhaps like IBM’s GPFS, or Apache Hadoop’s HDFS (I’m not a fan of the single NameNode, but that’s a different blog post), or something completely new.  I believe something completely new would provide more flexibility than forcing one of these off-the-shelf solutions to fit, but the general approach would be the same.  I’m not talking pie in the sky; this is how IBM runs its large supercomputers, and if it can be done for those, it can be done for this.

Each VMWare host becomes part of the greater storage cluster, not at the VM level but natively within ESX itself.  Think of the VMDKs as objects: replicate the writes across the data center, and VMotion would work in much the same way as it does today with VMFS on a traditional LUN.  Or, even better, have VMWare provide the means for storage companies such as IBM, HP, EMC and others to provide their own “File System Plugin.”  Storage becomes software, in the same way that servers, firewalls and network switches are now software thanks to virtualization.  Virtualize your storage on your virtualization platform, not externally.
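For the plugin idea, the contract might look something like the sketch below.  To be clear, this interface is entirely hypothetical, nothing like it exists in ESX today; it’s only meant to show the shape of it: the hypervisor hands VMDK reads and writes to the plugin, and the plugin decides where the replicas live.

```python
# Hypothetical "File System Plugin" contract -- not a real VMWare/ESX API.
from abc import ABC, abstractmethod

class FileSystemPlugin(ABC):
    """Contract a storage vendor would implement to turn local disks
    across ESX hosts into one clustered datastore."""

    @abstractmethod
    def create_vmdk(self, vmdk_id: str, size_gb: int, replicas: int) -> None:
        """Allocate a new VMDK object, spread across hosts in the cluster."""

    @abstractmethod
    def write_block(self, vmdk_id: str, offset: int, data: bytes) -> None:
        """Persist a block write to every replica before acknowledging."""

    @abstractmethod
    def read_block(self, vmdk_id: str, offset: int, length: int) -> bytes:
        """Serve a read from the nearest (ideally local) replica."""

    @abstractmethod
    def locate(self, vmdk_id: str) -> list[str]:
        """Return the hosts holding replicas -- what VMotion/DRS would
        consult to prefer a destination that already has the data."""
```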

Take this one step further and you could have different ESX hosts with different storage types: some nodes with SAS, some with SATA and others still with SSD.  It would even be possible to have non-storage nodes that don’t contribute to the overall storage in the cluster but provide additional CPU to run VMs, or dedicated storage nodes that don’t run VMs at all, though that’s not as ideal to me as having every node carry some storage.

Another step up the stack, VMs could be assigned to “storage types,” so that a “database VM” could be placed on storage of the “SAS” type, or on mixed SSD/SAS or SAS/SATA tiers, giving an ILM approach native to VMWare.  VMotion, DRS and the rest all become aware of each VM’s storage needs; VMWare would know which storage the VMs are provisioned on and be innately aware of the performance that storage is providing to the VMs themselves.  Allow multiple copies of each block to be stored on multiple nodes depending on redundancy requirements: have an important database?  Then keep 3+ copies distributed across the cluster.  Have a simple web server?  Perhaps keep 2 copies, or even just 1.
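A storage policy along these lines could be as simple as a per-VM declaration like the sketch below (again hypothetical; the VM names, tier labels, replica counts and helper function are mine), which is roughly all a storage-aware DRS/VMotion would need to consult:

```python
# Hypothetical per-VM storage policies -- tier names and replica counts
# are illustrative, not an existing VMWare feature.
storage_policies = {
    "db-prod-01":   {"tier": "SAS",     "replicas": 3},  # important database: 3+ copies
    "db-logs-01":   {"tier": "SSD/SAS", "replicas": 3},  # hot logs on the mixed tier
    "web-front-04": {"tier": "SATA",    "replicas": 2},  # simple web server
    "test-scratch": {"tier": "SATA",    "replicas": 1},  # disposable, one copy is enough
}

def eligible_hosts(vm_name, hosts):
    """Filter cluster hosts down to those that offer this VM's storage tier.
    A storage-aware scheduler would then pick replica placements among these."""
    policy = storage_policies[vm_name]
    return [h for h in hosts if policy["tier"] in h["tiers"]]

hosts = [
    {"name": "esx01", "tiers": {"SAS", "SSD/SAS"}},
    {"name": "esx02", "tiers": {"SATA"}},
]
print([h["name"] for h in eligible_hosts("db-prod-01", hosts)])  # ['esx01']
```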

Extend this to the data center level: replicate data to other data centers and have your “active” file system in your production data center and your “backup” file system in another data center hundreds of kilometres away.  You’re not replicating the entire file system on a schedule; you’re replicating the block writes to the VMDKs in the clustered file system.  Taken to the extreme, this would allow you to run an application in any data center at any time with little more than a VMotion to the other site.  It’s no longer about having a “production data center” and a “DR data center,” but about running apps in the data center best suited to the given workload, or perhaps the one that currently costs less per kWh.
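The difference from scheduled replication is that every acknowledged block write is also shipped, asynchronously, to the remote cluster.  Here’s a toy sketch of that idea (the site names and helper functions are invented for illustration, stand-ins for the real work):

```python
# Toy sketch of per-write asynchronous replication between data centers;
# the site names and helper functions are invented for illustration.
import queue
import threading

wan_queue = queue.Queue()

def local_cluster_write(vmdk_id, offset, data):
    # stand-in for writing the block to replicas inside the active data center
    print(f"[active DC]  {vmdk_id} @ {offset}: {len(data)} bytes committed")

def send_to_site(site, vmdk_id, offset, data):
    # stand-in for pushing the same block write over the WAN to the backup site
    print(f"[{site}] {vmdk_id} @ {offset}: {len(data)} bytes replicated")

def write_block(vmdk_id, offset, data):
    local_cluster_write(vmdk_id, offset, data)   # acknowledged locally first
    wan_queue.put((vmdk_id, offset, data))       # then shipped asynchronously

def replication_worker(site="backup DC"):
    while True:
        send_to_site(site, *wan_queue.get())
        wan_queue.task_done()

threading.Thread(target=replication_worker, daemon=True).start()
write_block("db-prod-01.vmdk", 4096, b"\x00" * 512)
wan_queue.join()   # the backup site now has the same writes, block for block
```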

EMC’s recent announcement of VPLEX achieves some of what I’m after, but it’s yet another set of boxes that isn’t directly part of the VM infrastructure.  From what I’ve read it also seems to be an FC solution, so again, it doesn’t address the complexity, cost and scalability issues inherent in an FC deployment.  Scaling FC to thousands of nodes isn’t practical for many reasons; a single clustered network storage option would address that and more.

Perhaps I’m dreaming, but I think this is completely doable; VMWare just has to realize it’s needed.

First Sous Vide Failure

February 9, 2010

I cooked the flat iron steak from last night until this morning, 12 hours at 134F, then cooled it quickly in an ice bath so I could reheat it, sear it and serve it for dinner tonight.  Lesson learned: since flat iron is quite a well-marbled piece of meat, 12 hours was FAR too long.  It was one notch from being mush and simply couldn’t be eaten.  We live and learn; next time, 2 hours tops for the flat iron.

The 24-hour flank turned out to be a winner, though.  I still need to tweak the seasonings a little, but it was tender, almost fillet-like.

Tomorrow night I’m going to do a chicken breast and this weekend will be salmon. More on the way…

Categories: General

Sous Vide Experiments Continue

February 7, 2010

Lamb sous vide, pan-seared in unsalted butter and finished with a blowtorch

Tonight I cooked Ontario lamb chops sous vide at 134F for 2 hours, and they were great!  The mint and garlic marinade penetrated the meat like nothing I’ve ever experienced.  After this I’m excited to try a rack of lamb.

I also picked up a hunk of flank steak as well as a flat iron.  I’m very curious about the flat iron; since it has a great amount of connective tissue I expect it will sous vide very well.  Time will tell.

I currently have the chimichurri-marinated flank steak in the SousVide Supreme and will be cooking it for about 30 hours.  Not sure how I’ll sear it; I might use the pan, but the blowtorch is proving very effective.  I’ll be making fajitas with it for tomorrow night’s dinner.

I get most of my meat at Medium Rare; they’re at Dundas & Kipling and are my go-to butcher.  If you’re looking for very good premium meat, I can’t recommend them enough.

I’m going to give salmon and chicken a go later in the week; a man can only eat so much red meat!

SousVide Supreme Unboxing & First Use

February 6, 2010

SousVide Supreme

Before I get into the SousVide Supreme, I believe a little tutorial on sous vide cooking is in order.  Sous vide is French for “under vacuum” and is a culinary technique whereby food is cooked for longer periods of time under precise, degree-accurate temperature control.  Think of it as something like poaching, but with far more lab-quality temperature control.  Food is placed in vacuum bags and left in a water bath at exact temperatures for prolonged periods of time.  Many foods can be cooked sous vide, from beef, chicken and pork to fish, seafood and vegetables, and because the food cooks under a vacuum seal there is little air and the flavours are locked in.

Another benefit of sous vide cooking is how forgiving the technique is; it’s almost impossible to overcook food.  Take a 1″ steak: cook it for 1 hour at 130F and you’ll have a medium-rare steak; hold it in the water bath for 8 hours and you’ll still have a medium-rare steak.  How?  Simple: the water is being held at EXACTLY 130F, so the meat can’t cook beyond that temperature.  This is particularly beneficial for seafood, which can be tricky to get “just right.”

So you might be asking yourself why you haven’t been cooking sous vide at home for years.  Simple: cost.  The typical equipment required to cook sous vide is a lab-quality thermal circulator from a company such as PolyScience; these cost well over $1000 and look like lab equipment.  The SousVide Supreme is the first home appliance designed for cooking sous vide, and at $450 it’s not a cheap appliance, but it’s far from the $1000+ that a PolyScience circulator costs.

PolyScience Immersion Circulator

Now onto business, unboxing:

The box.

Box opened

Box Removed.

The SousVide Supreme

None of what I’ve read about the SousVide Supreme online covers the build quality; after all, when you’re paying $450 for a countertop appliance there’s a certain level of quality one expects.  I’ve got to admit to being a little disappointed with the build quality of the SousVide Supreme.  It’s not poorly built, but it seems like costs were cut.  The lid is quite thin and doesn’t scream “expensive,” and the handles are cheap plastic too.  I don’t think anyone would guess it costs $450.

SousVide Supreme Control Panel

I’m not a fan of membrane keypads, and unfortunately that’s what the SousVide Supreme uses.  The buttons work well enough, but I expect they’ll eventually wear out like all membrane keypads do.  Granted, the buttons on the SousVide Supreme won’t get used very often, but this is again something that doesn’t scream “$450 appliance.”  Build materials matter, but in the end I’ll leave it up to the food to decide the overall verdict.

First Cooking Experience

Amazing Egg Yolk, whites not so much. Cooked @ 147F for 45 minutes

As I was out of vacuum bags I decided to try a boiled egg: I heated the unit up to 147F, placed 4 eggs inside and waited the 45 minutes.  The whites turned out runny and rather awful, but the yolks were nothing short of amazing, with a consistency more like Nutella than typical yolk.  Very good, but with runny whites not very appealing, so this will obviously become an experiment in time and temperature to find the correct combination.  I tried leaving one of the eggs in a little longer and raising the temp to 150F, but that simply resulted in a harder yolk, still nice, with a barely more solidified white.

The experiment continues…

Blowtorched Sous Vide Tenderloin


I made sous vide beef tenderloin in butter tonight.  I cooked it for 1h15m with some butter, salt, garlic and pepper, then used my blowtorch to crust the outside of the steak, getting a great Maillard reaction.  The final product was PERFECTLY cooked; there is no way one could duplicate with a grill the consistency that sous vide brings to the table, pardon the pun.  My only regret is the choice of meat: we just happened to have some tenderloin in the fridge and I wasn’t going to let it go to waste, but tenderloin isn’t a very flavourful cut, and that was VERY evident cooked in this manner.  It was good, but nothing terribly special.

Medium Rare Sous Vide Tenderloin

Tomorrow, perhaps some flank, tri-tip, rack of lamb or maybe some scallops?

CrossFit: Shaping Paul 4.0

January 16, 2010

We all go through various phases in our lives: childhood, adolescence and so on.  I believe I’m entering my fourth “version,” or as I recently started to refer to it, “Paul 4.0.”  Granted, my wife and friends might be calling this a “mid-life crisis,” but at least I haven’t bought a sports car or motorcycle, yet! 😉 I’m 3 months into the development of Paul 4.0; this “release” of Paul isn’t complete yet, but I’m making great progress.  In the last several months I’ve progressed both physically and mentally, and I’m still enjoying my CrossFit workouts as much as I did back in October.  Previous releases of Paul are as follows:

1.0: Childhood.  We’re all pretty much the same when we’re 1.0.

2.0: Adolescence.  I was skinny, really skinny, and by the time I was 17 I was also really tall (6’3″, ~155lbs).  When I was 20 I was in great shape: I could swim 2km non-stop and had put on about 10-15lbs, with ~11% body fat.  I still wasn’t very strong, but I was in good health.

3.0: Geek.  I’ve always been a computer person, dating back to the days of my C=64, but I consider “Paul 3.0” my real “Geek” phase.  When I was about 21 I started a BBS and got heavily into my computer, as this is where I saw my career.  It didn’t happen overnight, but I gained a ton of weight through the ’90s; by about 2000 I had reached as much as 265lbs with a 43″ waist.  I was BIG and I wasn’t healthy at all, drinking in excess of 3-5 cans of pop per day.  Ugh.  During a physical in 2003 I was diagnosed with VERY high cholesterol, so high that I had to be put on Crestor, and this is what led to Paul 3.5.

3.5: Geek IMProved (aka GIM, for the Linux geeks).  I consider v3.5 not a full new release of “Paul” but rather a small improvement.  I cut out sweetened pop and got used to sweetener, and I started to watch what I ate; my weight started to fall, getting to around 225lbs and about a 38″ waist.  I barely drink pop any more, diet or otherwise, perhaps 3 cans of Diet Coke per week, a far cry from the 20 or so cans of full-sugar, fructose-laced pop per week I used to drink.  I maintained this pretty well for a couple of years; however, watching what I ate was becoming difficult.  I like food too much, I’m a foodie.  By the summer of ’09 I was at 237lbs and a 40″ waist.  It was at this time I decided to take up exercise so I could have some foodie fun and generally feel healthier.

4.0: Fit Geek.  I started to run in late July of 2009 with a training plan designed to work up to a 5k, called “Couch to 5k.”  I followed this for a few months, highlighted by a 5k run over the Golden Gate Bridge during VMWorld in early September, and I was running ~3-4 times per week.  By October it was starting to get cold outside and I knew the running wouldn’t extend through the winter, not to mention that running wasn’t giving me the results I wanted.  It was at this time that I was introduced to CrossFit.  I won’t go on about that; I’ve posted about it previously.

Now that I’m 3 months into doing CrossFit, where am I?  I’m down to 219lbs, and if my Omron Body Composition Monitor is to be believed I’ve put on around 5-6lbs of muscle; my waist is now down to around 37″ and still shrinking.  My resting heart rate is now 62 beats per minute according to my Garmin heart rate monitor, which is considered “excellent” according to various sources I’ve Googled.  I’ve got FAR better definition in my chest and shoulders, and my build is starting to show that classic “V” shape rather than my previous “b” shape. 😉  I can now see my traps poking up above my shoulders, my arms are more cut, my chest is hard.  My endurance has increased, my strength has increased, and granted, sore muscles have increased too, but that goes with the territory.  No pain, no gain really does apply here, but it’s good pain: it’s discovering muscles you didn’t know you had, it’s looking in the mirror and being proud of what’s looking back at you.  Hard work pays off.

I credit the CrossFit format with keeping me interested in staying in shape this long.  The difference between Square One CrossFit, or any CrossFit gym, and a regular “big box” gym is what keeps me coming back: constant encouragement, not just from Kristine, Chris and Scott, the trainers at Square One CrossFit, but from the other members too.  We all push each other, we all congratulate each other for a job well done, and we actively want to see each other succeed.

I’ve completed some grueling workouts lately including the infamous “300 Workout” made famous by the movie.  I’m going to the gym now 4-5 times per week.  I’ll be working on Paul 4.0 for a good long while, but I’m still that geek, I just don’t fit the stereotype as closely as I used to.

Categories: CrossFit

Not All High-Def Is Created Equal: Bit Rate Matters!

January 10, 2010

I’m an early adopter, always have been.  I’ve been enjoying high-def TV since 2001.  Back then there wasn’t much high-def to enjoy (Nash Bridges anyone?), but watch it I did.  These days we’re bombarded with high-def choices, from traditional cable and satellite offerings and Blu-ray to online offerings such as YouTube, Hulu (not in Canada, grumble grumble) and many, many others.  Recently telephone companies such as Bell Canada have also been entering the high-def TV market with IPTV offerings over VDSL.

All of these services offer high-def ranging from 720p to 1080i and 1080p, but the biggest piece of information they don’t provide is bit rate.  What is bit rate, you ask?  Bit rate refers to the number of bits that are conveyed or processed over a given duration of time.  In computer networking terms your bit rate is your upload and download limit, as in 5Mb/second or 100Mb/second; when you talk about the speed of your DSL line at home you’re talking bit rate.  When it comes to video content, bit rate also plays an important role, as it is a sign of picture quality that’s far more important than resolution (720p/1080p/etc.).  I’d much rather watch a full-bit-rate 480p DVD video (9.8Mb/sec) than a 1080p video compressed for Internet playback over DSL (1.9Mb/sec).  Why?

Bit rate, when it comes to video quality, tells you how much information is available in the picture you’re about to watch.  Compression plays a major role in allowing us to watch video over low-speed (DSL) links, and even technologies such as Blu-ray use compression because the movie needs to fit on a Blu-ray disc.  Some codecs, such as MPEG-4 AVC (aka H.264), do a better job (less quality loss for more compression) than others, such as MPEG-2.  Blu-ray’s bit rate comes in at 40Mb/second and offers the most stunning picture of any home-based high-def format, but it’s still massively compressed, with the uncompressed rate of 12-bit 1080p video being roughly 2.25Gb/second.  How do these other “high-def” delivery mechanisms compare to Blu-ray and uncompressed 12-bit video?  Here’s a handy reference list:

Uncompressed 1080p Video:  2.25Gb/sec

Blu-Ray: 40Mb/sec

Over The Air HD: 19.2Mb/sec

Rogers Cable: 10Mb/s-16Mb/s (depends on channel)

Bell ExpressVu: 12Mb/s-15Mb/s

DVD: 9.8Mb/sec

Bell Entertainment (VDSL): Unpublished, and not documented anywhere that I can find.

Hulu HD: 2.5Mb/sec

YouTube 1080p: 1.9Mb/sec

* A note about the above bit rates: with the exception of Blu-ray and DVD, NONE of these are officially published numbers; I had to use my Google kung fu to find them.  If you know any of them to be inaccurate and can point me to official numbers, please let me know in the comments.  I was formally told by Bell Canada that they don’t publish bit rates, and that’s what prompted this post.
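To put those numbers in perspective, here’s a quick back-of-the-envelope calculation (using the bit rates listed above) of how much data a two-hour movie actually contains at each rate:

```python
# Rough size of a 2-hour movie at each advertised bit rate (video stream only).
bitrates_mbps = {
    "Blu-Ray": 40, "Over The Air HD": 19.2, "DVD": 9.8,
    "Hulu HD": 2.5, "YouTube 1080p": 1.9,
}
seconds = 2 * 60 * 60
for name, mbps in bitrates_mbps.items():
    gigabytes = mbps * seconds / 8 / 1000   # megabits -> gigabytes (decimal)
    print(f"{name:16s} {gigabytes:6.1f} GB")
# Prints roughly: Blu-Ray 36.0 GB, Over The Air HD 17.3 GB, DVD 8.8 GB,
# Hulu HD 2.2 GB, YouTube 1080p 1.7 GB
```

Even over-the-air HD carries roughly ten times the data of the 1080p YouTube stream; that gap is exactly what the resolution number alone hides.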

Now that you’re starting to understand the role bit rate plays in the quality of the image, you can see that not all high-def is created equal.  From where I sit, Blu-ray, being the best we can get at home (uncompressed high-def isn’t realistic at home), is the benchmark, and all other “high-def” sources need to be compared against it: a sort of “Blu-ray scale” where Blu-ray is a 10/10, OTA HD would rank around 4.5/10 and YouTube maybe 2.5/10.

More important than a scale, though, I’d like to see companies that offer high-def content stop talking purely in terms of resolution.  They must start talking in terms of bit rate; how else is a consumer supposed to make an informed decision and compare the various high-def offerings that are available?  Take Rogers, for example: they announced to their subscriber base a little while back that they were going to start compressing some channels more than others, which directly impacts the quality of the product being delivered to the customer.

So don’t just look at the cost; take into account the picture quality, and by extension the bit rate, when you’re choosing which service feeds those great-looking high-def images to your brand new LCD or plasma screen.

Categories: Home Theater