2011 hopes and predictions

I thought I would provide some short and sharp predictions for what I think 2011 will bring (or fail to bring):

Cloud confusion….still

Cloud confusion will still be rife, with a complete lack of true understanding within most organisations of what the cloud really is and can achieve. Expect the beloved vendors to keep using Cloud next year as the next big opportunity to not repeat the mistakes previously made in the Client/Server world. One thing's for sure: the vendors in question will continue to bet their future strategy on Cloud, and they will damn well send us as much garbage via every form of media possible to ensure that this strategy doesn't fail.

I expect even more fancy web portals, APIs and technologies to grow within vendor cloud portfolio offerings, and I predict (don't laugh) that we will see more vendors adopting standards such as CDMI and OVF within their cloud offerings. However, I don't expect this to be completely black and white: vendors will still try to win the hearts and minds of the consumer with product offerings before putting the consumer's flexibility and interoperability first.

Hybrid cloud approaches will be more heavily utilised in 2011. I see hosting Test/Dev in the cloud, or outsourcing your workflow engine to a third party, becoming quite popular with organisations that run production virtualisation environments in the datacentre yet lack the budget or flexibility to use that infrastructure for hosting Test/Dev VMs.

Last but not least, expect to see some vendors pushing portfolio offerings less directly and more via the cloud providers they are in bed with. We've seen this with email; we'll see it further with PaaS/SaaS offerings providing CRM, BI and other business application services (in enterprises, not SMBs).


Wall Street was busy in 2010, with a massive amount of consolidation and acquisition by the big fish; we saw bidding wars and surprise-tactic purchases of the more innovative smaller fish within the world of data storage. However, I doubt we'll see much change here in 2011: I expect the acquisitions were merely to improve existing portfolio offerings and gain more customers. We may see some of that innovation integrated into portfolio offerings, but I expect this will merely bring them to a level on par (no pun intended) with the 800-pound gorilla.

2010 was a year of new optimisation technology offerings within vendor portfolios, the top ones being sub-LUN tiering and read cache modules. For 2011 I predict, and hope, we will see more innovation in intelligent data placement. One hope is that we see toolsets supporting orchestration/workflow capability to align any sub-LUN policy to the business's requirements and logic, such as SLAs and performance metrics; all of this is something we do today, but it is done pretty much under a headless-chicken project methodology.
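As a rough sketch of what SLA-aligned sub-LUN placement could look like, here is a toy policy that maps observed extent activity to a tier. The tier names, thresholds and function are purely illustrative assumptions of mine, not any vendor's actual toolset:

```python
# Hypothetical sketch of SLA-driven sub-LUN tiering. Names and thresholds
# are illustrative only, not taken from any real product.
from dataclasses import dataclass

@dataclass
class Extent:
    lun: str
    offset_gb: int
    iops: float          # observed IO rate for this extent

# Tiers ordered fastest-first, with the IOPS level that earns them
TIER_POLICY = [
    ("ssd",  1000.0),   # hot extents earn flash
    ("fc",    200.0),   # warm extents sit on 15k spindles
    ("sata",    0.0),   # everything else ages down to SATA
]

def place(extent: Extent) -> str:
    """Pick the first (fastest) tier whose IOPS threshold the extent meets."""
    for tier, threshold in TIER_POLICY:
        if extent.iops >= threshold:
            return tier
    return TIER_POLICY[-1][0]

hot = Extent("lun01", 0, 2500.0)
cold = Extent("lun01", 512, 4.0)
print(place(hot), place(cold))  # ssd sata
```

The point of the sketch is only that the policy table, rather than a human in a project meeting, carries the SLA logic.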


Expect the focus of virtualisation companies to move further into application delivery technologies, and this prediction applies even to traditional server virtualisation companies. There will no doubt be a big push to get customers using SaaS offerings, whether it's the vendor's core product or a product they resell to hosting/cloud companies.

This shift to SaaS delivery no doubt provides a much better opportunity for vendors to make up for the licensing mistakes made on existing portfolio offerings; it also provides the opportunity to sell more indirect cloud offerings/services. My view is that the vendors who have developed and sold application delivery technologies will continue to be dominant, but towards the latter end of next year new entrants to the app delivery space will gain traction with cloud-delivered solutions.

Expect the hypervisor and all of its bells and whistles to be more or less abstracted completely; the main focus will be on technology offerings that let the consumer apply business logic. We have seen this already with product releases from the market-leading virtualisation companies, and we will probably see much more push on this in 2011, for the same reasons they will push SaaS as the de facto application delivery model: to either sell the product better, or sell products indirectly via hosting/cloud companies that know how to do it successfully.


Not a hot topic for me, but I predict converged networking will still not be deployed by the majority; this is a technology that requires changes in IT efficiency and IT structure, which is not something that can be introduced into an IT strategy overnight. FCoE will become "more" popular within the datacentre, but I wouldn't expect it to replace existing storage protocols, purely due to cost and the nested-investment facts of life.

I expect 2011 will be the start of more software-driven functionality. What I mean by this is similar to what we see in the world of server virtualisation: a move to encapsulating the underlying hardware and concentrating more on intelligent software capable of accepting business logic. We saw this with unified platforms in 2010, but I expect more portfolio offerings will begin to advance on it.

Something new?

I'd hope so, but wouldn't want to predict the unlikely. 2011 is going to be a recovery year for vendors; they will want to play it safe to regain any revenue lost over 2009/10. Expect any obvious innovation from the big corps to come from acquisition, not in-house R&D, though I'm sure you don't need me to tell you that. I'd like to see something new evolve; we've had server virtualisation evolve, but I'd really like to see technology with a wow factor. It's been far too long a year to be hearing about nothing but packaged "building blocks" and fancy web portals.


I've given a few predictions and thoughts as 2010 comes to a close; hopefully it was short and sweet enough. Some of these are more hopes of what I would like to see. I've got a lot more I could add, but have tried to keep this concise and to the point.

Storage – The missing self service portal link?

At the moment I am doing a lot of investigation into the IT self-service portal in the internal, on-premise scenario. Before you read this post, though, Mr Salesman, don't get too excited and start licking your lips: the reality is that I think a large number of problems still exist in orchestration and workflow tools when put into the world of how IT does things today. It takes more than a few fancy bits of web page and drop-downs to convince me that existing toolsets and methodologies can provide a business with a completely self-sufficient infrastructure, or to convince me that I can leave the whole design and planning behind and go pioneer someplace else in my organisation.

When it comes to fulfilling the virtual machine request with an SSP (Self Service Portal), the available marketplace solutions are all capable of delivering this functionality: link some code to use an API, build some VM templates and off you go. However, something I currently fail to see is the detail within the devil (yes, I did mean to type it that way round ;)) in the way orchestration portals deal with how the deployment of storage for that VM occurs: how you govern the amount of space a user can have or request, how you ensure your storage stays below threshold, and many more day-to-day operational tasks.
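To make that gap concrete, here is a rough sketch of the kind of governance gate an SSP could run before it ever touches the provisioning API. The quota figures, threshold and function names are my own assumptions for illustration, not taken from any shipping product:

```python
# Hypothetical storage-governance gate for an SSP request; all names and
# numbers are illustrative assumptions, not a real product's API.
QUOTA_GB = {"dev-team": 500}   # per-requester space allowance
ARRAY_THRESHOLD = 0.80         # refuse to fill a pool past 80%

def approve_vm_storage(requester: str, size_gb: int,
                       used_gb: int, pool_capacity_gb: int,
                       pool_used_gb: int) -> bool:
    """Gate a VM storage request on both requester quota and array fill."""
    within_quota = used_gb + size_gb <= QUOTA_GB.get(requester, 0)
    pool_ok = (pool_used_gb + size_gb) / pool_capacity_gb <= ARRAY_THRESHOLD
    return within_quota and pool_ok

# A 100 GB request from a team already using 450 GB fails on quota,
# even though the pool itself has plenty of headroom.
print(approve_vm_storage("dev-team", 100, 450, 10_000, 2_000))  # False
```

Nothing here is clever; the point is that today the portal rarely asks even these two questions before handing the request to the array.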

It’s all so cheap!

With server CPU and RAM (supposedly) being so damn cheap, most vendors building tools that support automated self-service are under the illusion that anyone who wants a server can have one. However, when it comes to virtualised environments and the spinning rust that most likely stores a VM, this is still not the case. Storage is an expensive beast, and because of this it needs to be more tightly controlled; if it isn't, you run into all manner of issues in providing performant services to your business.

My beef with orchestration tools is that they seem to give the requestor of a service no real understanding of what's happening behind the fancy web portal. Take an app/database set of VMs, for example: a project may well request a VM that needs a certain volume of RAM and CPU, and then the all-important storage config comes up. As far as I've established, it's not easy to ensure that the storage allocation in such an example would be suitable to meet the business's expectation. How do you ensure that the disk subsystem is configured for the request to meet the dedicated IOPS? How do you ensure that it's on the most appropriate RAID/media type? The fact is, the requester won't know. Remember, that VM goes into a big storage array which has 101 other VMs running on it; how does the request for a new service via an SSP ensure that these are not affected?
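The placement logic I'd want behind the portal could be sketched like this: check each pool of the requested RAID/media type for spare IOPS before admitting the new VM, so the 101 neighbours aren't squeezed. The pool names and figures are hypothetical:

```python
# Hypothetical IOPS-aware placement check. Pool data is made up for
# illustration; a real portal would pull it from the array's management API.
POOLS = [
    {"name": "pool-a", "raid": "raid10", "media": "fc",
     "iops_capacity": 10_000, "iops_committed": 9_500},
    {"name": "pool-b", "raid": "raid10", "media": "fc",
     "iops_capacity": 10_000, "iops_committed": 4_000},
]

def find_pool(raid: str, media: str, needed_iops: int):
    """Return the first matching pool that can absorb the new load,
    or None if the request should be refused rather than forced."""
    for pool in POOLS:
        if (pool["raid"] == raid and pool["media"] == media
                and pool["iops_committed"] + needed_iops <= pool["iops_capacity"]):
            return pool["name"]
    return None

# pool-a is nearly saturated, so a 2,000 IOPS workload lands on pool-b.
print(find_pool("raid10", "fc", 2_000))  # pool-b
```

The interesting behaviour is the `None` case: an honest SSP should be able to say "no suitable storage" instead of silently oversubscribing a shared array.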

Land of confusion

Maybe I'm missing the point about the purpose and protocol of an SSP. Maybe it still needs someone with architectural knowledge performing the service requests via the portal on behalf of that business requirement? Maybe I'm thinking that "da cloud" really isn't as self-sufficient via an SSP as the industry marketing states it is?

SSPs will no doubt become more and more prevalent within businesses; we have enough virtualisation and other infrastructure technology supporting them now to have no excuse not to. I'm convinced that the SSP is definitely the most future-proofed way of reducing the large amounts of time lost to red tape, and that if approached properly, the SSP will significantly reduce cost.

SSPs in an on-premise hosted infrastructure certainly do offer slightly more "hands-off" operation. The reality, though, is that even with the hardware resources that grow on trees, such as physical server hardware, I expect it will always need someone ensuring that the environment is completely capable of delivering the workloads, and any future workloads.


After dipping my toe into the world of orchestration and workflow activity, it does appear to me that a large amount of change still needs to occur before these tools can do the job they state they can. It will come in time.

My hope is that enhancements will arrive (cheaply) that negate the need for so much attention to detail when it comes to storage provisioning. Until then, I feel that SSPs will more than likely remain the general gatekeeper for wildcat development environments, and will merely be a shopping window advertising the price of something (usually unaffordable) when it comes to deployment of end-to-end requirements.