Goodbye 2009…hello 2010!

Well, what a year, and what a turbulent ride 2009 has been for anyone based in the technology sector: job cuts and threats of them, diminishing sales and revenue for our beloved vendors/resellers (meaning fewer free lunches at VMworld) and general doom and gloom whichever way you look, with increased prices for all the important things such as laptops, storage, iPods and… I mean… bare essentials such as food, drink and clothes.

After a rough ride I feel I have come out moderately OK and had a successful year. I started this blog in January and really felt it wouldn’t go anywhere; I’ve tried blogs before and, to be honest, had problems writing suitable content that would grab a good audience, let alone bring sensible comments to blog posts (as in someone not offering to enlarge one’s genitalia or claiming a lottery win in Lagos).

2009 has certainly been an interesting year. I have achieved the following and hope to improve upon it in 2010:

  • Managed to hit approximately 20,000 visitors on the blog, which if you’d predicted to me last year I’d have laughed at you,
  • Made a post at least 1-2 times a week (work permitting),
  • Have had a few Planet V12n top-five mentions, which was very cool — thanks for these Duncan, it was really encouraging. I have also been linked on other websites as an external resource,
  • Had the good fortune to meet a lot of interesting people in the blogosphere who have commented on my content and at least said it’s good (to my face anyway :))
  • Passed and upgraded to VCP4! Great, as I can now focus on moving forward and away from practical-based qualifications until ESX5 (I hope).

So on to 2010 and some general New Year’s resolutions for me on the blog:

  • I hope to publish less purely technical content and more that is based on strategy and methodology; this will be more the case in 2010 as I’m no longer hands-on in my day-to-day role,
  • I left massive volumes of content out of my blog in 2009 simply because I didn’t feel it was worth the bother of posting; this has been an irritant when I then see someone else post the same thing and get kudos on their own blog. I hopefully will not do this as much in 2010,
  • I will move off Blogger! The CMS is just a frigging nightmare and not worth wasting hours of my year on in 2010. This may additionally bring a rebrand for VMlover.com; I have decided that for the blog to go up a level I need a rebrand so I can reach more of my target audience,
  • I may try to bring in external resources and knowledge from industry players and thought leaders with guest content; this will depend on whether I get interest, so if you are interested do comment on this post.

So I wish you and your family a very Merry Christmas and a happy, prosperous New Year. Keep posted on the blog, as your presence and comments are gladly appreciated.


Daniel

PHD Virtual – breaking into banking!

Thought I’d share an interesting news item I found on the PHD Virtual website on a recent success story for the PHD guys: http://www.phdvirtual.com/company/press-releases/142-esxpress-ramatically-reduces-seattle-financial-groups-vmware-recovery-time


You may have read my review of esXpress 3.6 back in August. My opinion is that they are certainly focused on resolving general backup pains and problem areas, and are serious about backup technology and methodology. If you get time over the festive period I seriously encourage you to have a look at the fully usable product demo.

This customer success story certainly shows that their products are capable of providing enterprise-level backup and holding their own under large-scale backup demands. Look out for more great product developments in the core esXpress product set in 2010; I wish them continued success.

Views on Automated storage tiering

In light of EMC’s introduction of its latest AST (Automated Storage Tiering) solution (funnily enough named FAST), here are some quick, easy-to-read predictions for you to maybe rip apart in the comments section of this post:

Array design

I believe storage arrays and disk layouts will be designed and planned a hell of a lot differently than they are today. Functional requirements will be taken into account less and less when planning storage for an application or planning array deployments from scratch. With the introduction of automatic array tiering into the mainstream, I see design considerations being based more on overall disk capacity requirements and the oversubscription limits that can safely be achieved, while also ensuring that relevant SLAs based on workload characteristics are guaranteed, i.e. “between the hours of 10-12 this batch process will get X amount of IOPS”.
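To make that capacity-led approach concrete, here is a minimal sketch (in Python, with entirely invented workload names and figures) of planning a pool against overall capacity and an agreed oversubscription ceiling rather than per-application spindle layouts:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    capacity_gb: int   # allocated (thin-provisioned) capacity
    peak_iops: int     # worst-case observed IOPS

# Hypothetical pool: 50 TB usable, planned against total capacity and an
# oversubscription ceiling instead of per-application disk layouts.
POOL_USABLE_GB = 50_000
MAX_OVERSUBSCRIPTION = 1.5   # assumption: allow up to 150% allocation

workloads = [
    Workload("oltp-db", 12_000, 8_000),
    Workload("batch-reporting", 30_000, 3_000),
    Workload("file-shares", 25_000, 500),
]

allocated = sum(w.capacity_gb for w in workloads)
ratio = allocated / POOL_USABLE_GB
print(f"Allocated {allocated} GB against {POOL_USABLE_GB} GB usable "
      f"(oversubscription {ratio:.2f}x)")
assert ratio <= MAX_OVERSUBSCRIPTION, "pool over the agreed ceiling"
```

The point of the sketch is that the planning inputs are the pool totals, not the per-application disk geometry.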

I relate this to planning the deployment of workloads into a VMware farm that uses shared hardware resources. The “farm” or “pool” of storage will become the norm, with the storage admin’s responsibility shifting towards calculating and understanding the running capacity and available expansion space on the array, while the AST algorithm calculates and reports back to the storage bod the locations workloads can be moved to. (It is safe to say, though, that AST is going to stay in manual mode for even the bravest of storage admins for a while.)

Know thy workload

In most storage environments today we plan and over-allocate for the worst-case IOPS or MB/s requirement of the workload, and any issues that arise get dealt with in a reactive manner. If it does what it says on the marketed tin, AST will make this type of planning irrelevant: we won’t need to know the workload, the array will move it for us.

If only limited performance metrics for the application profile are available up front (which I expect to be the common case), then AST-enabled arrays will provide the option to monitor post-deployment, with the peace of mind that you can migrate with no downtime to the running workload. Advanced tiering thus gives the administrator a greater opportunity to turn reactive issues into proactive scenarios, with greater visibility of what the application actually does (and whether the vendor is lying about its requirements).

Additionally I expect (and hope) vendors with AST functionality will provide tools that show expected “before and after” results of moving storage that doesn’t sit on an AST-enabled array into an AST-enabled environment. I also expect to see an onslaught of third-party software companies providing such a facility (if they don’t already).

Adoption rate

In my opinion the latest incarnation of FAST from EMC will not be deployed and adopted aggressively within storage infrastructures for a while: the feature is not back-ported into Enginuity for DMX3/4, so only the rare customer who has bought a VMAX or CX-4 recently will be capable of implementing this technology. Additionally, FAST isn’t free; expect people to keep their hands in their pockets until the technology has been proven.

Higher storage tier standbys

SSDs are an expensive purchase, and AST introduces the possibility of sharing SSD capability between workloads; if planned appropriately, you may be in a position to oversubscribe the use of SSD between applications. If an app that runs in the day needs grunt in the shape of IOPS, it can share the same disk pool with an app that requires throughput out of hours.
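As a rough sketch of that idea (hypothetical hours and workloads, nothing vendor-specific), deciding whether two workloads can safely share an SSD pool comes down to whether their demand windows overlap:

```python
# Hypothetical sketch: can two workloads share the same SSD pool because
# their demand windows on a 24h clock never overlap?
def windows_overlap(a, b):
    """Each window is (start_hour, end_hour) with end > start."""
    a_start, a_end = a
    b_start, b_end = b
    return a_start < b_end and b_start < a_end

day_oltp = (9, 17)       # daytime app needing IOPS grunt
night_batch = (22, 24)   # out-of-hours app needing throughput (22:00-00:00)

if not windows_overlap(day_oltp, night_batch):
    print("Safe to oversubscribe: workloads can share the SSD pool")
```

A real array would of course also have to account for windows that wrap past midnight and for demand spilling outside its planned hours.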

Virtualisation and AST

Expect to see AST benefiting from and working heavily at the API level with VMware; I already wrote back in July about a vision for this, the basis being that virtual admins will no longer have to worry as much about placing virtual machines on the best-suited storage. Today, various LUNs and RAID-specific VMFS volumes need to be deployed so VMs can be hosted to match application workload. In time, we may see a single generic VMFS volume, with AST technology moving data based on the workload requirements of the VMs on that disk.
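A sketch of the sort of placement policy I have in mind (tier names and IOPS thresholds entirely invented, not any vendor’s actual algorithm) might look like:

```python
# Hypothetical AST-style policy: pick a target tier for a VM's disk from
# its observed I/O profile. Thresholds are invented for illustration.
TIERS = [
    ("ssd", 5000),   # minimum sustained IOPS to justify SSD
    ("fc", 500),     # fibre-channel 15k spindles
    ("sata", 0),     # everything colder settles onto SATA
]

def target_tier(observed_iops: int) -> str:
    for tier, threshold in TIERS:
        if observed_iops >= threshold:
            return tier
    return TIERS[-1][0]

print(target_tier(8000))   # hot VM
print(target_tier(120))    # cold VM
```

The interesting part in practice is not the threshold check but the non-disruptive data movement underneath it, which is exactly what the array APIs would have to expose.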

Summary

Hopefully I have made some valid points on where I think AST will fit into environments and how it will change general design best practice. I do not use an AST-enabled array and will not have the capability to for some time, so excuse me if some of the above is already possible or being done.

Exchange 2010 – Infinite Instance Storage

It has been a while since my last post; I’ve been busy on a lot of fronts: revising for my VCP4 exam (which I passed :) ), working heavily on projects at work, and a holiday.

After my hectic month I’ve now had the chance to catch up with the latest Exchange 2010 product changes, and felt compelled to post on what I discovered. It appears that Microsoft has removed Single Instance Storage functionality from 2010.

Single Instance Storage

Introduced in Exchange 4.0, SIS (a form of deduplication) ensures that an attachment emailed to multiple people is stored only once as a single master file, rather than a copy being kept in every user’s mailbox. From an overall Exchange database perspective, a file sent to many recipients consumes only the storage of a single copy.
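As an illustration of the concept (not Microsoft’s actual implementation), SIS-style deduplication amounts to storing each attachment once, keyed by its content, with mailboxes holding only references:

```python
import hashlib

store = {}       # content hash -> attachment bytes, stored once
mailboxes = {}   # user -> list of content-hash references

def deliver(user, attachment: bytes):
    digest = hashlib.sha256(attachment).hexdigest()
    store.setdefault(digest, attachment)   # kept only on first sight
    mailboxes.setdefault(user, []).append(digest)

memo = b"quarterly results attached " * 1000
for user in ("alice", "bob", "carol"):
    deliver(user, memo)

print(len(mailboxes), "mailboxes reference the file,",
      len(store), "copy actually stored")
```

Three recipients, one stored copy: that single-copy behaviour is what disappears when SIS is removed.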

You were probably as dumbstruck as I was when I read about the EOL of SIS in the latest Microsoft blurb, and you’re probably thinking exactly what I did: there must be a new type of SIS, or a newfangled replacement that improves upon it, or even a completely new architecture to save on storage consumption altogether. Well, it appears it was none of those. I dug deeper and came upon the following blog post confirming that it is now completely EOL.

It seems that, like me, readers of the official Microsoft blog have great concerns about the architectural changes and the side effects of implementing 2010 compared to 2007 within a typical large-scale environment. Also echoed are concerns about what types of day-to-day operational problems it will lead to in future. To be frank, Microsoft sound a bit blasé when justifying why they have removed SIS; they seem to infer that technology like SIS is legacy and that customers do not actually benefit from its storage reductions, and that SIS is in fact being removed to provide a performance benefit.

How Microsoft can measure that SIS is not useful any more is beyond me; Exchange customer use cases are all different, but in the field, regardless of what Microsoft think, SIS, however small its effect, reduces storage costs for most organisations. It has also made Exchange more efficient in other areas, such as reducing backup windows and the associated restore times for Exchange databases.

Microsoft’s justification seems to be that, compared to 4-5 years ago, disk is cheaper and bigger. Yes, they are right; it may well be cheaper when compared against DAS-connected environments, and I have no issue with that. My issue is that most large organisations like mine do not use DAS for large-scale Exchange, for some of the following reasons:

  • DAS does not provide volume snapshot capability for backup and restore activity,
  • DAS storage volumes cannot be replicated to a secondary local or offsite array for any purpose,
  • Backup windows with DAS compared to a SAN are not even worth comparing; backup across the wire with DAS is unquestionably going to be slower for large volumes of Exchange data,
  • You cannot clone a DAS storage volume non-disruptively in the background, and quickly, as you can on a SAN; this is useful for things you should perform regularly, such as production backup integrity tests,
  • With DAS you have a tight dependency between host and storage; you can move or change a Fibre-connected server much more easily than a DAS one,
  • Try providing cache priority or QoS to a DAS volume!
  • Try managing DAS remotely and from central consoles!
  • On a TCO front, a SAN most probably provides much better cost and operational savings compared to having pockets of large storage pools on DAS.

I’m not a SAN bigot (well, maybe just a bit), but I’m sure the reasons above show what limitations arise from using DAS in the enterprise, and why for applications like Exchange you need to implement SAN infrastructure.

The example cost hike

To see what type of cost increase I might experience without SIS after upgrading to Exchange 2010, take an example calculation based on 1,000 users being sent the Christmas message from the CIO with a 5MB attachment. Sent to 1,000 people, this attachment would theoretically consume 5GB of storage on the Exchange DB, consumption which would be avoided with SIS in Exchange 2007. Multiply that example across a typical messaging environment, with carbon copies of large presentations, more company announcements with attachments (maybe a Lotus Notes quotation from Procurement?) and so on, and Fibre Channel storage certainly starts to become a very expensive option without something like SIS.
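The arithmetic behind that example (using the figures above) is trivial to check:

```python
# Figures from the example: 1,000 recipients, one 5 MB attachment each.
users = 1000
attachment_mb = 5

without_sis_gb = users * attachment_mb / 1000   # every mailbox holds a full copy
with_sis_gb = attachment_mb / 1000              # one master copy, SIS-style

print(f"Without SIS: {without_sis_gb:.0f} GB, with SIS: {with_sis_gb:.3f} GB")
# → Without SIS: 5 GB, with SIS: 0.005 GB
```

One mail blast is a rounding error on a big array; thousands of them per year on Fibre Channel disk are not.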

Additionally, let’s not forget that most organisations who have deployed 2007 have most likely implemented it on new SAN arrays which are not due for renewal and which will be expected to host 2010 as well; a SAN is unfortunately not something you can throw away and replace with DAS. And remember, DAS has hidden costs associated with operational management.

So, to summarise the negative side of this post: I am not happy with such functionality being removed. By removing SIS from technology I will have no choice but to upgrade to in future, Microsoft have just increased my storage costs by 10% and increased the volume of disk that I now require in my array moving forward (tough luck using proprietary tech, hey). Lastly, to put this into perspective, I can’t be bothered to find Microsoft’s pricing for Exchange, but I am more than sure the 2010 software is not 10% cheaper :)

The positive comments for Microsoft from this post

I’m not that hard on vendors all the time; I do have some positive comments here. The positive side of this post mainly focuses on the fact that losing features such as SIS means you should treat your storage strategy and all-round planning more seriously, with a complete archive methodology.

A commercially available archive solution such as Symantec Enterprise Vault or Quest Archive Manager means you can host mail items on lower-tier SATA or archive-class disk, which in turn reduces the size of primary Exchange storage and the associated requirements at the higher storage tier. Importantly, though, archiving shouldn’t mean you cut off your nose to spite your face and replace SAN with DAS; SAN still has tangible benefits across most large enterprise environments for many other reasons.

Summary

Maybe I’m being unfair to Microsoft with my vendor rants; we have had SIS functionality reducing storage costs for a while without realising it, and have taken it for granted. More and more, I think we will need to shift our focus to alternate strategies using archiving products, and be sensible about the lifecycle of storage management within email environments. Longer term, it will be interesting to see results from people migrating to 2010, to see if they notice a dent in storage costs if they are using SAN and not the horrible dreaded DAS!