I’m PaaS’ing out……

Depending on your native spoken language you may read the title of this post a bit weirdly in your head. It is supposed to read “I’m passing out”, which means “starting to lose consciousness”. All of this is in jest of course; it’s just my reaction to yet another PaaS offering, with the announcement of a partnership between Salesforce and VMware to create VMforce.

PaaS emergence

I have written at length in the past about Cloud computing layers. Out of all of these, the PaaS layer opens my eyes the most as to what Cloud based IT strategy is starting to evolve into. With the benefits of PaaS today and the level of power (and potential carnage) it brings, I am starting to think that IT is going to change dramatically in the next 5-10 years. Changes will occur in how we perform RFIs, set up SLAs with both our business customers and the provider, define architectural governance and policy, define roadmaps, and also in how IT responds operationally to the dynamic capability and architecture of a Cloud based computing approach.

The Transition

Changes won’t happen overnight by going out and buying a PaaS solution. The reality is that several key measurements and governance processes, which probably don’t exist yet, need to be introduced before even contemplating moving applications and services into a cloud. Needless to say, the lack of this, or rather the lack of it being highlighted by everyone and anyone selling Cloud solutions, worries me, and the industry needs to realise that Cloud won’t provide as much of a cost benefit as is quoted if its strategy is not employed correctly. Being a realist, the cost of IT matters a lot to me; I am responsible in my capacity for ensuring that technology investments are solid and do not increase IT budgets, both on the initial investment and in ongoing operational costs. Governance and changes to current processes will, I feel, be required in some of the following areas to ensure the cost base is controlled and a successful strategy is employed:
  • Code Optimisation – Throw poorly written, lazy code into a metered cloud and it will end up costing you more money. Developers will need to think thin and think economically when developing applications.
  • Budget Planning – Today if you experience poor performance with poor code you throw more infrastructure resource at it; that resource (if virtualised) has been procured and sits in an available pool. In the Cloud, because you have moved to an OPEX pay-per-use model, budgeting can skew due to applications consuming too much resource or taking longer to run.
  • Testing and UAT – More emphasis will be required on testing the performance of an application targeted for Cloud than is done in today’s world. I see testing emerging that proves applications are efficient, rather than proving that they can scale to X amount of users and provide Y amount of resilience.
  • Exit Clauses – The taboo of any hosted/outsourced agreement; to my mind this needs to apply in any agreement from day one as a de facto. Flexibility to move between cloud providers, and the assurance that your data can be moved without vendor lock-in, are paramount to achieving a beneficial cost reduction.
  • RFI – Requesting information on what services can be offered by vendors/suppliers will quite frankly be a waste of time. If a cloud provider is a true cloud provider, sales “examples” and the gumph that comes with them should be available to view on the provider’s website. RFIs will need to focus less on the product and more on what the service being offered truly provides, or rather what it fails to provide.
  • RFP – Quite a brash statement this one, but in a cloud world is this going to be used? Who is going to request and tender for a cloud based solution if my prediction on RFIs comes true?
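To make the metering point in the Code Optimisation and Budget Planning items concrete, here is a back-of-envelope sketch of how inefficient code translates directly into OPEX in a pay-per-use model. The rates and workload figures are entirely made up for illustration, not taken from any provider’s price list.

```python
# Illustrative cost model for a metered (pay-per-use) cloud.
# All rates and resource figures below are invented for the example.

def monthly_cost(cpu_hours: float, gb_ram_hours: float,
                 cpu_rate: float = 0.10, ram_rate: float = 0.01) -> float:
    """Return the monthly charge for a metered workload."""
    return cpu_hours * cpu_rate + gb_ram_hours * ram_rate

# An unoptimised app burning twice the CPU and RAM of a tuned one:
lazy = monthly_cost(cpu_hours=1440, gb_ram_hours=11520)
tuned = monthly_cost(cpu_hours=720, gb_ram_hours=5760)

print(f"lazy code: ${lazy:.2f}/month, tuned code: ${tuned:.2f}/month")
```

In a virtualised on-premise pool the lazy version just consumes idle capacity you already paid for; in a metered cloud the difference lands on the bill every month.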

As you can see I’ve noted just a few potential areas that may change dramatically when introducing an IT strategy based around a cloud computing world. This is an exciting turning point for IT. I believe that to succeed, IT governance and policy control will still need to be emphasised strongly within IT departments in order for strategies to be successful and tangible. A lot of this process is done today but just needs to adapt to the model of Cloud computing. It’s an opportunity not to make the same mistakes made in previous incarnations of IT, and let’s hope this time it’s done right!

High Density Virtual Hosts

This post is a bit controversial (which posts on VMlover aren’t?). It discusses the recent x86 hardware releases and their value proposition of offering greater total RAM density for virtualised environments. When I say virtualised I mean server workloads that run business applications, not VDI.

One of the reasons this post could be classed as controversial is this: I question whether RAM density is really giving the customer the excess space and capacity we are being led to believe it can. By this I mean using a box with suitable I/O and form factor that we would class as our sweet spot for virtualisation, based on the usual factors such as a comfortable consolidation ratio, price per VM, etc.

Wind the clock back

Four or so years ago, when 32-bit memory addressing was the de facto standard for all OSs, it was also the start of something good, with server virtualisation beginning to be accepted within most enterprise datacentres. At the time, 64-bit OSs and applications were only a twinkle on a distant roadmap, or limited to high end RISC based platforms. The transition from hosting your applications on 32-bit was also in no way a simple swing to x64; the vendors had to do a large volume of work for x64 to become a reality.

Today though datacentres are very different. We have 64-bit versions of almost every x86 commodity OS going, all capable of exploiting larger RAM allocations for apps, and these apps are compiled to make best use of large quantities of RAM. One example is Exchange 2007, which is designed to use RAM to cache as much as feasibly possible without having to touch disk and introduce disk I/O, the deadly bottleneck of virtualised environments.

My controversial bit

So what am I getting at? Well, regardless of the excessive volume of RAM that recent x86 hardware can be populated with, I think we are still unlikely to get the levels of consolidation ratio in the real world that the manufacturers’ marketing uses as the key selling point. I think we are seeing a relative curve of new x64 generation OSs and applications requiring higher volumes of RAM as a baseline minimum just to run sufficiently, and we also have an explosion of growth in the RAM required to operate applications on x64 due to demand for optimal performance and more concurrency.

In virtualised server environments, VMware vSphere allows the assignment of memory volumes to VMs that are suitable for running Tier 1 workloads such as Exchange, SAP and Oracle. As with the OS/app stack, in earlier incarnations a virtualised environment comprised smaller, under-utilised VMs; now you have VMs with 12-16GB of RAM which run extremely happily, and the more RAM the merrier, so overall larger VMs mean less VM density per host. Within virtualised environments you also have the eggs-in-one-basket scenario and the implication of outages on hosts holding large VM counts, which has been discussed many times across the blogosphere; some architects and designers prefer a more risk-averse deployment strategy, generally down to the SLA expectancy from the business. Personally, if you have a resilient and highly available environment built with failover in mind, using a VM host with high density capability is certainly acceptable and something that can be implemented.
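Some quick arithmetic shows why bigger VMs eat the density the spec sheet promises. All the host and VM sizes here are illustrative round numbers of my own, not figures from any vendor datasheet.

```python
# Rough consolidation-ratio arithmetic: as per-VM RAM allocations grow,
# the number of VMs a large host can carry shrinks, whatever the
# marketing says. All figures are illustrative round numbers.

def vms_per_host(host_ram_gb: int, vm_ram_gb: int,
                 hypervisor_overhead_gb: int = 4) -> int:
    """How many VMs of a given RAM size fit on a host (no overcommit)."""
    usable = host_ram_gb - hypervisor_overhead_gb
    return usable // vm_ram_gb

# Yesterday's small under-utilised VMs vs today's Tier 1 workloads:
print(vms_per_host(128, 2))   # plenty of small legacy VMs fit
print(vms_per_host(128, 16))  # far fewer Exchange/SAP-sized VMs fit
```

Double the host RAM and double the per-VM allocation, and the consolidation ratio stays exactly where it was; that is the relative curve I am describing.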

So to summarise my controversial view: I feel that OS/app and VM RAM sizing requirements grow in step with how big the host grows, and so will not allow you to achieve the density you think is possible.

Some future solutions to this problem

I’m going to digress away from the existing factors I’ve stated that affect the host size versus VM density argument, and talk about future directions that may ensure we can achieve higher density with hosts.

Removal of the OS, or of the level of bloat that is required to run an OS, would be a start; this is something which has been a vision since as far back as 2007 from Mendel Rosenblum. Reduction of the OS means running the application on a finely tuned, minimal footprint operating environment which uses only what the application requires. This strategy is achievable today either with JeOS or, dare I say it for Windows environments, with 2008 Server Core. Remember though that a thinner, less feature rich OS usually means it has no GUI and the admin needs to rely on pure CLI commands to operate the environment. In future, vendors need to focus on the way VMware has architected its management strategy with ESXi. ESXi has APIs for externally controlled management tools such as vCenter and PowerCLI, and additionally it utilises open standard interfaces such as CIM and WS-MAN for managing hardware components.

An obvious one is that developers could of course work on making applications more efficient. This is tough though; the industry has a shady past to shake off with how apps have been compiled to run within the Windows world, although this has had the flip side of making them easier to manage and easier to deploy. My prediction is that the next step towards more efficient applications lies within the development of cloud based applications, where the metered OPEX cost of running applications will drive efficiency.

At an altogether different layer to approaching the problem of excessive memory consumption by the OS and application, VMware vSphere includes great technology which can provide a denser environment. It does so with the following key features:

  • Transparent Page Sharing – VMware environments can reduce the amount of memory physically used by sensibly recognising memory allocation patterns and sharing identical pages between VMs. This removes a large volume of noise that would occur in non-TPS environments such as Hyper-V and Citrix (for now), which can’t do this and can’t overcommit resource.
  • Memory Ballooning – Working as a virtual hardware driver within a VM, the balloon driver manages memory allocation based on memory activity occurring within the VM. If memory is requested it works with the host to provide the requested memory (if available); if the VM isn’t requesting memory it interacts with the host to allow it to reallocate memory to another VM that needs it.
  • Memory Compression – A future release that will compress memory pages. Scott Drummonds has detail on this: http://vpivot.com/2010/03/01/memory-compression/
  • Flash Caching – I wrote about this and how Oracle Exadata benefits last week; this provides disk caching, but at a faster rate than standard spinning rust provides.
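As a rough illustration of the first of those features, here is a toy model of page sharing: pages with identical content across VMs are backed by a single physical copy. Real TPS hashes 4KB pages inside the hypervisor; this sketch, with names and page contents of my own invention, just counts unique page contents.

```python
# Toy model of transparent page sharing: pages with identical content
# across VMs (e.g. common guest OS code) are backed by one physical
# copy. This is a counting sketch, not VMware's actual algorithm.

def physical_pages_needed(vm_page_sets: list) -> int:
    """Count distinct page contents across all VMs on a host."""
    unique_pages = {page for vm in vm_page_sets for page in vm}
    return len(unique_pages)

os_page = b"common-guest-os-code"
vm_a = [os_page, b"app-a-private-data"]
vm_b = [os_page, b"app-b-private-data"]

allocated = sum(len(vm) for vm in (vm_a, vm_b))   # pages the VMs think they have
physical = physical_pages_needed([vm_a, vm_b])    # pages actually backed in RAM
```

The more homogeneous the guest OS estate on a host, the bigger the gap between `allocated` and `physical`, which is exactly where the extra density comes from.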

The above do have performance penalties in some shape or form; they are small, but it is important to bear this in mind.


Maybe a bit controversial here, but it’s my view. I am a realist and very rarely fall for marketing goop and excessive numbers touted on a spec sheet. I think it’s important that the industry looks to make the workload more efficient instead of using Moore’s Law to make their products look more attractive than competitors’.

Oracle Exadata – Complexity Killer?

I thought I would raise some thinking matter and opinions on this after recently attending the Oracle/Sun “Oracle Extreme Performance Data Warehousing” event. The event included a morning of content and slideware highlighting the inherent architectural benefits of Oracle Exadata 2.0 and how it can be used to improve the performance of processing important data hosted on DWs and OLTP systems. In addition to the Exadata content, I was also interested in going along to get a vibe on what the future of SUN will look like now that Oracle has completed its acquisition.

Exadata history

Since Exadata version 1.0 (the HP version) I have learnt a great deal about Exadata’s value proposition and the technical benefits it offers, via the various industry coverage and marketing pushes from Oracle. I have also heard great things about it from a #storagebeers founder and brain in a jar, @ianhf, who absolutely raves (in a grumpy way of course) about the levels of performance he is experiencing with data warehouses on the kit.

Following the full SUN acquisition, the Exadata v2.0 Database Machine has surfaced with a large push positioning it as the IBM P series killer. It now seems to be pushed by Oracle more than anything else, and I went along to find out why, what differentiates it from alternative high end infrastructures, and how something like Exadata pitches its benefits over and above SAN.

I will give an overview of what’s under the hood and then give you my view on Exadata and what I think its pros and cons are for datacentre environments.

Technical Overview

Here is a brief summary of some of the key tech features highlighted in the slideware:

  • Oracle Exadata Smart Flash Cache – Exadata v2.0 utilises this technology as a differentiator from other conventional DB server/storage infrastructure. Flash Cache in Exadata puts “hot data”, which would typically sit on slower external disk, on localised flash; this removes the bottleneck associated with excessive data processing between conventional server and SAN disk.
  • InfiniBand Connectivity – “Exadata cells”, or in simplistic terms the Exadata disk shelves, use IB to connect between disk and disk shelf. This is configured in a grid format with Oracle RAC, so data scales across cells.
  • Utilises cost effective storage mediums – I was led to believe it used Flash as the storage medium, when in actual fact it doesn’t, and doesn’t need to thanks to Flash Cache. It uses SAS/SATA disks to store older, less frequently accessed data and utilises the Flash Cache to store “hot” tables that are in high demand from DB queries.
  • Scalable building block design – Entry level begins with a single shelf and can scale up as and when required. Before attending I was led to believe by the in-your-face marketing tactics of Mr Ellison that it was a one-rack-only solution, when in actual fact you can start with one cell and scale up as and when required.
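The “hot data in flash” idea from the first bullet can be sketched with a toy LRU cache: repeatedly queried tables get served from flash while cold blocks fall back to spinning disk. This is entirely my own illustration, with invented table names, and not Oracle’s actual caching algorithm.

```python
# Toy sketch of a flash cache fronting slow disk: recently-read blocks
# live in flash, the coldest block is evicted when capacity is hit.
# Illustrative only; not how Exadata Smart Flash Cache is implemented.
from collections import OrderedDict

class FlashCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = OrderedDict()   # insertion order == recency order
        self.hits = self.misses = 0

    def read(self, block: str) -> str:
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)   # mark as most recently used
            return "flash"
        self.misses += 1                    # fetch from spinning disk
        self.cache[block] = True
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the coldest block
        return "disk"

cache = FlashCache(capacity=2)
for blk in ["orders", "orders", "customers", "orders", "archive", "orders"]:
    cache.read(blk)
```

A skewed access pattern like the one above keeps the hot table in flash for every repeat read, which is why the approach works so well for DB workloads where a small set of tables dominates the queries.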

The feature list above is a small summary of the technical detail, and to be honest it may not be completely right or complete; there is a whole plethora of technical benefits that lie within an Exadata. During the event you weren’t able to kick the tyres on the Exadata; it was kept at arm’s length, mainly due to the target audience being data-related IT folk and business leaders. Figure 1 is my high level view of where relevant data resides and where I/O requests flow within the Exadata stack.

Exadata design principles

Oracle quotes the required infrastructure “plumbing” for an Exadata setup as pure consolidation in a box when compared to typical end-to-end FC environments, and this is quite apparent: with one rack providing TBs of storage and a quoted 10x performance gain over conventional infrastructure, it quite firmly backs the statement up. Exadata offers a simplistic, built-from-factory set of infrastructure building blocks which eliminates the complexity and the excessive design and planning work needed to implement SAN based environments.

Exadata my view

So here is my take on Exadata… First statement: like most other Oracle product offerings, the sales tactic is to penetrate a business top-down via the C levels in organisations. On the technical front, infrastructure architects very rarely get to know a product like Exadata, RAC, ASM etc., so the push into organisations is via the database admin/architect. Exadata is no exception to this; I say so confidently based on the lack of any hands-on or kick-the-tyres approach at the Exadata event, and also there seems to be a plethora of system integrators who you can engage with to find out more (thanks but no thanks).

Exadata is not sexy. It’s not tweakable, as it’s already tweaked; it just sits in a rack in your datacentre and number crunches (apparently very quickly), something the mainframe has done for years and something that has made IBM what they are today. Exadata is not a magic piece of hardware; it is a combination of common sense architecture design and well thought out integration tuning for the all-important workload being catered for, the Oracle DB.

I relate how Exadata has been engineered to Graeme Obree and his historic washing machine bike, a record-breaking bike built from ideas and unconventional parts. This is so relevant to Exadata because the features I highlighted within the technical overview are nothing new, meaning you could potentially build your own system which achieves similar results. Remember:

  • You can buy PCI based flash cache cards from organisations like FusionIO and use them with Oracle 11g
  • InfiniBand switches are readily available and you can implement them to achieve similar levels of performance
  • Working with your data teams, you can likely achieve similar performance gains by incorporating the hardware initiatives found within Exadata

Exadata is not going to go away, and Exadata v1.0 proved this: it apparently was a flop in the marketplace, so much of a flop that Oracle released another one! Importantly, I believe an advantage is that it does not use heavily developed ASICs or onboard array software and hardware; the software has already been developed or is commoditised, which is a benefit against SAN array vendors who have invested in customised array software and monolithic arrays. Therefore I expect Oracle to keep banging the drum with Exadata.


Apologies for the long post, I tried to keep it to the bare minimum. Exadata, although small in footprint, really is a finely tuned beast that offers people the chance to move away from complex FC based environments for DW and OLTP without sacrificing performance. Importantly though, you must be conscious of the benefits of a SAN that get left behind, such as replication, cloning, ease of management and many more.

My prediction is that environments for large DWs will be the breadwinner for Oracle, and also the winner when put up against Netezza and other DW specialists in an RFP. For OLTP workloads however, there is a slim chance that Exadata would go any further than the storage teams in an organisation. SANs offer benefits beyond just the LUN that is presented. I’ve not covered DR and continuity in Exadata; that’s a whole new blog post on its own, but SAN will predominantly win on this account and I may blog about it sometime in the near future.