Friday, 15 January 2010

LUN provisioning models and technology change

A fellow blogger / tweeter of mine who works in storage (see www.grumpystorage.com or tweet @ianhf) recently posted an article around LUN sizing / tiers and asked what we were using... The debate continued on Twitter (albeit in shorthand, due to the maximum size of a tweet), so I thought I would put some of my thoughts down around where we are with LUN sizing, how we got there, and where I feel we may need to adapt to adopt new sizing.


This is really just a rambling note throwing some food for thought out there - feedback and ideas are most welcome!

So... where to begin... 


Back in the dawn of time there was an incumbent, and said technology made use of small pieces of logical disk (let's call them hypers) which you could glue together to make bigger single LUNs (let's call them metas) - OK, so I guess that's let the cat out of the bag then ;-) With said incumbent, thanks to best practices, maximum logical device limits etc., we ended up with a carve-up size (for the hyper) of around 8.4GB - and hence a standard was born. Chargeback models were fixed to this increment and everyone started working in multiples of 8.4GB for all allocations.



New technologies came along, different tiers of storage rocked up, etc. - but due to the complexity of changing financial models / chargeback within the org, the same chargeback increment was kept (yep, 8.4GB).

So how does our charging / billing system look today (for our traditional tiered storage)? Well, something like this:

  • Tier-1, charged in increments of 8.4GB (with possible meta sizes of 16, 32, 64 or 128) - then glued together with a volume manager
  • Tier-2, charged in increments of 8.4GB (with possible meta sizes of 16, 32, 64 or 128) - then glued together with a volume manager
  • Tier-3, charged in increments of 8.4GB (with possible meta sizes of 64 or 128) - then glued together with a volume manager

In terms of tier (performance): tier-1 = mirrored 15k, tier-2 = RAID-5 10k (3+1), tier-3 = RAID-5 SATA (3+1).

Typically we use products such as VxVM to glue these LUNs together (and obviously this incurs a charge as well).
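
To make the legacy model concrete, here is a minimal sketch (not our actual billing code - the per-tier rates are made-up numbers) of how an allocation gets rounded up to whole 8.4GB hypers and charged:

```python
import math

HYPER_GB = 8.4                      # fixed carve-up size inherited from the incumbent

# Hypothetical monthly rate per 8.4GB increment - illustrative numbers only
RATE_PER_HYPER = {
    "tier-1": 5.00,                 # mirrored 15k
    "tier-2": 3.00,                 # RAID-5 10k (3+1)
    "tier-3": 1.50,                 # RAID-5 SATA (3+1)
}

def legacy_charge(requested_gb, tier):
    """Round a request up to whole hypers and price it at the tier's rate."""
    hypers = math.ceil(requested_gb / HYPER_GB)
    allocated_gb = hypers * HYPER_GB
    monthly_cost = hypers * RATE_PER_HYPER[tier]
    return hypers, allocated_gb, monthly_cost

# A 500GB request on tier-2 becomes 60 hypers = 504GB allocated (and billed)
print(legacy_charge(500, "tier-2"))
```

The point being that a 500GB ask gets billed as 504GB, and everything downstream (volume manager layout, chargeback reports) is built around that rounding.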

Pretty boring and inflexible, huh? Well, this is where we are right now! (We do have some nice new shiny storage - see the following paragraphs - but it is still in the process of being rolled out mainstream.)



Enter the new tranche of technologies - virtual provisioning, over-provisioning (thin provisioning, whatever you want to call it), thin snaps, fat-to-thin conversion, automated tiering - plus a dual-vendor strategy, and suddenly it all breaks... Why? Because we have become so rigid in our approach to chargeback, using these pre-determined LUN sizes with a billing system built around them... (We are currently trying to figure out how we want chargeback to really work over the next 3-5 years.)



People no longer want to pay for this inflexible amount of disk due to standards we have to impose on them (because of a once-rigid technology issue) - they want to pay for the blocks they use, with the ability to grow as and when (and with minimum disruption). It's also worth noting they may now wish to grow not just into more disk space, but also to be upgraded in terms of IO performance. Basically, our customers want to use the new and shiny products mentioned in the paragraph above and have a chargeback model that works.... hmmmmppphhhhhh. If anyone has any ideas on that, or even better has something that works utilising these technologies, then please shout and let us all know (it will save a whole bunch of heartache!)



So back to where we want to go with LUN sizing - basically, to move away from this 8.4GB increment and give the customer what they think they want. As an example, if they want 20TB usable then we will give it to them (sort of) - but make it thin provisioned and let them grow into it (as a single LUN - why not, the wide striping available on most arrays does the nice back-end spread for us) - basically, move to an oversubscribed model. Generally we see around 40% disk utilisation on most of our file systems... We might even make them pay for the full 20TB up front, as this can fund the next tranche of seed kit. (A rough sizing sketch follows below.)
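
As a rough illustration of the oversubscription sums (the ~40% utilisation figure is from above; the safety margin is an assumption of mine):

```python
def physical_needed(provisioned_tb, expected_utilisation=0.40, safety_margin=0.15):
    """Physical capacity (TB) needed to back a thin-provisioned pool, with headroom."""
    return provisioned_tb * expected_utilisation * (1 + safety_margin)

# Backing a 20TB thin LUN: roughly 9.2TB of real disk behind it on day one...
print(physical_needed(20))              # -> 9.2 (TB)
# ...which is an oversubscription ratio of a little over 2:1
print(20 / physical_needed(20))         # -> ~2.17
```

So a 20TB thin-provisioned LUN only needs around 9TB of real disk behind it to start with, and we grow the pool as actual utilisation creeps up.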



As for tiers of storage and performance - we will probably stick to the same standards we have had (mentioned above) in terms of drive performance, with the obvious addition of SSDs (at some point)...

In terms of volume management - volume managers will be knocking around for donkeys' years. I make no secret that I am a fan of VxVM, as it has been pretty consistent through the years; however, I can see a point soon where native OS volume managers are going to be good enough that we will start using them (wish I could say the same for multipathing products!)



So that's where we are, I guess...



But hang on... hold the press (as they say) - there is another change on the way......

So we spend time modifying our chargeback models to cope with these different tiers of storage and thin provisioning... but there are three more changes heading our way which may mean we have to evolve again, these being:

  • An SSD caching layer in front of nothing but SATA spindles (with some top-notch caching algorithms)
  • SSD-only arrays
  • Automatic tier upgrade / downgrade at a very granular (block) level (i.e. promote frequently used blocks to higher tiers).
These obviously change the chargeback model once more, as we no longer have so many tiers (and in the case of SSD-only arrays, one tier and one only)... A toy sketch of the third item is below.
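
Here is a toy illustration (not any particular vendor's algorithm - the thresholds, block granularity and tier names are invented) of what block-level promotion / demotion by access frequency looks like:

```python
from collections import Counter

io_counts = Counter()                   # block id -> IOs seen in the current window

def record_io(block_id):
    io_counts[block_id] += 1

def retier(hot_threshold=1000, cold_threshold=10):
    """Decide a tier per block for this sampling window, then reset the counters."""
    placement = {}
    for block, ios in io_counts.items():
        if ios >= hot_threshold:
            placement[block] = "ssd"    # promote frequently used blocks
        elif ios <= cold_threshold:
            placement[block] = "sata"   # demote cold blocks
        else:
            placement[block] = "fc-10k" # leave the middle ground alone
    io_counts.clear()                   # start a fresh window
    return placement
```

The interesting chargeback question is that the customer never asked for a tier at all - the array decided - so what exactly are we billing for?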



The SSD-only array is interesting, and in my humble opinion it's where the technology will ultimately be heading... There are a couple of vendors I have been speaking to (currently in stealth - happy to share on a 1:1 basis where I can) that have some very healthy ideas and some great technologies... I'm also not (a) totally mad or (b) sitting on shed-loads of money to throw at SSD-only - the reasoning goes like this:

  • SSD pricing will fall more and more as SSDs go into commodity machines such as desktops (not enterprise drives, I know, but there is a knock-on effect - just look at what happened with SATA)
  • With all the de-dupe / compression that is now viable, plus the IO profile of SSD, we may well be able to use fewer drives for today's data sets (it also depends on how de-dupable your data is!) - see the rough sums below
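
Some back-of-an-envelope sums on that second point (the de-dupe and compression ratios here are pure assumptions - your mileage will vary wildly with workload):

```python
import math

def ssd_drives_needed(logical_tb, dedupe_ratio=3.0, compression_ratio=1.5, drive_tb=0.2):
    """Drives required once data reduction is applied (drive_tb ~ a 200GB SSD)."""
    physical_tb = logical_tb / (dedupe_ratio * compression_ratio)
    return math.ceil(physical_tb / drive_tb)

# 50TB of logical data at 3:1 de-dupe and 1.5:1 compression -> ~11.1TB physical
print(ssd_drives_needed(50))            # -> 56 drives
```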

I guess in reality the first step will be a combination of SSD and SATA if you invest in the relatively near term.



So - there are my rambling notes... 



One other point worth mentioning (and note that I made this point in my previous blog entry) - in order for us to facilitate all of this new shiny technology and be able to truly offer chargeback, we need tools that allow us to charge by $/TB of data/information stored, not on physical allocation... More importantly, the vendors need some way of charging on that metric too... no more dealing in $/TB physical, please!
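
A minimal sketch of the difference between the two billing metrics (the rates are hypothetical - the point is how far the numbers diverge once thin provisioning and de-dupe are in play):

```python
def charge_physical(allocated_tb, rate_per_tb=1000.0):
    """Legacy metric: bill on what was carved out, whether it holds data or not."""
    return allocated_tb * rate_per_tb

def charge_logical(stored_tb, rate_per_tb=1400.0):
    """Desired metric: bill on data actually stored (a higher per-TB rate,
    but the customer only pays for what they really consume)."""
    return stored_tb * rate_per_tb

# Customer was allocated 20TB but actually stores 8TB of data:
print(charge_physical(20))              # -> 20000.0
print(charge_logical(8))                # -> 11200.0
```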



Have a good one...

Stu. 

2 comments:

  1. So, have you thought about how to charge for de-duped data, for example? Often different business units are storing the same data for reasons known only to them; if we charge for the data stored, are we not at risk of double-dipping?

    One of the problems with charge-back models is that the storage department actually ends up as a profit centre... this never goes down well with finance and users.

  2. @storagebod - yep, agree with your comments around being a profit centre, and also around how you charge for de-duped data...

    Part of this is in the tooling, and the shift that we have to make... rather than paying $/TB for tin, it needs to be for data stored... and the second part of this is vendor tooling - to show us de-duped data so that we can actually charge for the infrastructure that is used by a given user...

    Bottom line - we need the vendors to give us a tool that tells us how efficient their software / hardware combo is, so that we can instrument and charge accordingly...

    Of course, this lot only really works properly if you have a chargeback model that works (i.e. you are able to get the financial mechanism working within the company) and you follow a true service-type structure.

    Definitely worth talking this one through at #storagebeers.
