Friday, 30 December 2011

Commoditised virtualisation products / Multi-vendor hypervisor strategy

It's interesting to watch the current and emerging virtualisation / support / management / deployment tools continue to mature. I wrote an article some time ago about the choices you have in this market space; an area that was once dominated by VMWare is now being challenged by a number of players (please see previous blog article here: )

Watching the emergence and maturing of these product sets has got a number of the consumers and technologists who work in these areas thinking about how to leverage tool sets, what the right use cases are, and obviously the cost implications of doing this (everything has a price - and it's about what you are willing to pay for a given tech stack).

There is still no doubt that VMWare is best of breed as an x86 virtualisation and consolidation tool, but when you look at others that are "good enough" - and take into account the possible "tax-breaks" you could leverage - things get interesting.

So what do I mean by the term tax-breaks? I am really referring to the fact that certain products that can virtualise also change the way that licensed guest products are charged (in some cases a discount, in others an outright release from the charge). Hyper-V gives some great wins when it comes to guested Microsoft product sets, as an example - so a combination of "good enough" and a significant drop in price point really adds to the case for going multi-vendor in the virtualisation and management space.
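As a back-of-an-envelope illustration of how a tax-break changes the sums (every figure below is a made-up placeholder, not a real list price - check your own licence agreements), the comparison looks something like this:

```python
# Hypothetical comparison of per-host virtualisation cost for a set of
# guest OS instances, with and without a vendor "tax-break" on guest
# licensing. All figures are illustrative placeholders.

def host_cost(hypervisor_licence, guests, per_guest_licence, guest_discount=0.0):
    """Total cost of one virtualisation host: hypervisor plus guest licences."""
    guest_cost = guests * per_guest_licence * (1.0 - guest_discount)
    return hypervisor_licence + guest_cost

# "Best of breed" hypervisor, full-price guest licences.
premium = host_cost(hypervisor_licence=5000, guests=20, per_guest_licence=800)

# "Good enough" hypervisor whose vendor discounts its own guested products.
good_enough = host_cost(hypervisor_licence=1000, guests=20,
                        per_guest_licence=800, guest_discount=0.75)

print(premium, good_enough)  # the gap is where the multi-vendor case is made
```

Even with toy numbers the point stands: once guest licensing is discounted, the hypervisor's own price tag stops being the deciding factor.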

So where else are we seeing this level of challenge? Just take a good hard look at the combined deployment, orchestration and workflow management tools coming through the ranks. Element tools that are provided for just one flavour of stack are becoming less and less palatable for the medium to large enterprise. Tool sets that cover complete sets of infrastructure, tie into an existing ecosystem and fit in with existing tool sets and processes (process does have to change) have got to be the way to go!

So in summary, I guess what I am really saying is:
- Hypervisors are commodity as is the ecosystem that wraps around them
- Understand the use case and make an informed decision based on cost as well as the "nice to haves"
- Consider "good enough" rather than Rolls-Royce - you very rarely need *all* product functions
- Commoditise your use case by vertical if you can - VDI, MS-SQL Servers, web services etc would be examples of this
- Consider tax breaks you can utilise based on different vendors (i.e. Hyper-V / OS Licensing implications)
- Management tools that tie to one product set will become a thing of the past quickly

Wednesday, 28 December 2011

Good enough is erm.... good enough!!!! Consumerisation of DC Products

Why do some vendors not get that commoditisation / consumerisation of technology is fundamentally changing the way that technology is delivered to an end customer?…..

No longer are we interested in bespoke manufacturing lines that make best-of-breed silicon / multi-layered circuit boards that translate into bloody expensive product - if a requirement can be met with a commoditised product set, then crack on and use it. Basically, we want to start using product that is "good enough" but also at the right price point….

I expect to see this in all product areas that exist in the data centre - good enough, at the right price point, and taking advantage of commoditised technology.

Enough said…

Monday, 12 September 2011

When cloud goes wrong - Recent Microsoft Cloud365 Outage

I watched with great interest the announcements on outages from both Microsoft, around their Office 365 outage, and Google, with their cloud offering - the upshot being they were out of action for a good few hours.

For those that didn't know / want to read - there is good commentary from BBC that can be found here:

It was interesting to watch the reaction from the general global technology populace to the bad press on cloud (and there we were thinking that cloud was the second coming, could never go wrong and was capable of doing god-like things). More interesting, though, was the lack of understanding of the implications and issues this raises around cloud computing, and of something lots of people still don't want to admit to themselves when buying into this approach: application architecture is just as important as infrastructure (in fact, even more so).

Why did Cloud365 (Cloud362?) break? Was it due to poor infrastructure? Could be... Was it due to issues around application design? More likely. Was it due to a combination of application, infrastructure and a lack of fault-tolerant design? Almost definitely.... OK - in this instance it sounds like Cloud365 had issues due to DNS outages / fat fingers - but the point is that it shouldn't matter... The service should be designed in a way that doesn't need rock-solid infra - it should just move to a location where its compute requirements can be serviced.

Imagine that rather than having a discrete DC with a discrete network and its own DNS management, the infrastructure was geographically dispersed and the application was able to take advantage of all of these dispersed infrastructure islands, moving both the app and stateful data WITHIN THE APP LAYER - would a significant outage have occurred? Probably not..

If we want to embrace this new computing paradigm, it shows us that this isn't about smart chunks of hardware, sooper-dooper resilience, data replication, really smart hypervisors or a whole bunch of other infrastructure offerings. It's about an application's ability to scale out, scale wide, restart and be tolerant of infrastructure failures. It's all about the application architecture, how it is layered on top of infrastructure and how the presentation to the end user is designed.
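As a trivial sketch of what "tolerant of infrastructure failures" can look like in code (all the endpoint and service names below are invented for illustration), an application that carries its own failover logic stops caring which island of infrastructure answers:

```python
# Minimal sketch of app-layer failover: try each deployed instance of the
# service in turn rather than depending on any one DC, network or DNS entry.
# Endpoint names here are illustrative placeholders.

def call_with_failover(request, endpoints, transport):
    """Return the first successful response, tolerating individual failures."""
    errors = []
    for endpoint in endpoints:
        try:
            return transport(endpoint, request)  # any island that can serve us
        except Exception as exc:                 # broken server / DC / network
            errors.append((endpoint, exc))
    raise RuntimeError(f"all endpoints failed: {errors}")

# Example with a fake transport where the first two "regions" are down.
def flaky_transport(endpoint, request):
    if endpoint != "eu-west":
        raise ConnectionError(f"{endpoint} unreachable")
    return f"handled {request} at {endpoint}"

print(call_with_failover("login", ["us-east", "ap-south", "eu-west"], flaky_transport))
# prints: handled login at eu-west
```

A real implementation would add retry budgets, health checks and state movement, but the principle is the same: the resilience lives in the application, not underneath it.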

If you were thinking that expensive VDC solutions (FlexPod / vBlock / Matrix etc.) are going to help with this - they won't. Sure, some of them are great virtualisation platforms, but they are NOT cloud platforms - the apps and layering approaches are what make a good cloud platform.

Sorry, application dev / architect types - I'm afraid a lot of this now falls on your shoulders to achieve this brave new world.

Finally - a few thoughts / assumptions to take into account when designing for this nice fluffy cloud stuff:

- All server infrastructure breaks
- Datacentres break
- Networks break
- Operating systems break
- Geographic issues exist

Concentrate on the issues associated with the above points and architect with them in mind - and you might just get something that could be cloud compatible.

Finally - don't try to architect from the bottom up (i.e. from the infrastructure layer upwards); concentrate on the application design downwards and the attributes that are required (scale out, resilience, security, data availability, disaster recovery, encryption etc.)



Note to Microsoft (if you happen to be listening - which I doubt): take the chance to educate the technology community and explain how you are going to fix these types of issues moving forwards. (As a clue, the answer isn't to make DNS more resilient - we would all be very interested in how you are going to change application design, management and orchestration approaches to achieve a 100% uptime goal, rather than a whole heap of new infrastructure stuff!)

- Posted using BlogPress from my iPad

Location:London,United Kingdom

Tuesday, 16 August 2011

Cloud ready tools - yeah right!!!!

Cloud is most definitely the buzzword of the decade. If you don't have a product that can either provide cloud, enable cloud or do cloud-type stuff, it appears you don't have a product (no, I don't believe this - but the various vendors landing at my doorstep do!)

Having sat in a number of vendor presentations around cloud infrastructure software, specifically focused on capacity / monitoring / utilisation etc., it has really started to p!$$ me off - lots of rubbish being spoken.

A specific product and pitch being looked at was a really good tool for analysing VMWare virtualised environments - but overnight it had suddenly turned into a "cloud enablement" tool with all sorts of random application and cloud functions that just didn't add up. Yep, they over-pitched it!

On pushing the above-mentioned vendor and asking how this will / has helped my "journey to cloud", I asked a number of questions around product direction and roadmap - a flavour of these follows:
- How are you going to help me understand when I need to burst into externally provided services, but also allow me to bring workloads back inside my DC boundary when I either have surplus capacity or decide to "up-rate" my infrastructure?
- How are you going to interface to my service catalogue so that we can ensure we understand where to place workloads?
- How are you going to assist with a service model to ensure that the right customer is put on the right platform at the right time?
- How are you going to measure my metrics to ensure that when I pass my internal platform to an external source, it meets the right specification based on an agreed SLA?

I wasn't really expecting answers to all of the above; neither was I looking for complete roadmap answers - just a bit of a view into the general direction the product was going to take to answer some of these things.

It became obvious that no real work had been done to turn this into a "cloud tool", but somewhere along the line someone in their marketing department had decided that virtualisation = cloud and it was job done!

Needless to say, the vendor did NOT in fact have a great cloud story - what they did have was a pretty good virtualisation capacity management tool…

Lesson - this is an evolution; things don't just convert / change / evolve without input… Look at where you want to take the tool and work to get it there. Don't just flip the name in the hope that some unsuspecting customer will fall into the trap and buy the slideware.
Nuff said!

- Posted using BlogPress from my iPad

Sunday, 14 August 2011

Competition to VMWare - There is choice

It's really interesting watching the emergence of new hypervisor, virtualisation and "cloud" tools, and the way that other companies in this space have been dismissive of new tech and the options open to end customers. When I hear phrases such as "you don't need to worry about that as we already have" and "what other choice do you have" from some of these bigger vendors - it can make my blood boil.

Well... As for the other choices - it seems we have lots emerging. Watching the likes of Hyper-V, Xen and KVM come flying through the ranks really should get VMWare quite concerned - and there are a number of reasons for this, the high-level points being:

- More management tools are becoming readily available
- Customers are changing their development practices for virtualised environments
- Maturity in DR and new styles of application failover will start removing the legacy fail-over / fail-back paradigm
- Standards such as OpenStack / OpenStorage will start becoming prevalent
- Open source will win (look at what happened with proprietary Unix models and Linux)

Management tools - yeah sure, VMWare have a tool for every day of the week, and some of it is pretty slick - but here is the problem... Large enterprises are now making investments in tools for enterprise-wide orchestration, deployment and management, and with these come some pretty comprehensive frameworks and the ability to do some smart stuff. If you have done a good job at the infrastructure layer and architected environments appropriately, some of these higher-end tools such as vCloud Director, vSphere etc. become redundant.

Once you start taking on products such as KVM / Xen / Hyper-V, buy enterprise-class tools around them to manage them appropriately. If you are using the correct processes and understand a common approach across your infrastructure, your tool selection should answer not only your virtualisation approach but all of your platform deployment / management / orchestration needs.
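A sketch of what "one tool set across hypervisors" means in practice (the class and driver names below are invented for illustration - real tooling wraps things like libvirt or vendor APIs underneath): the orchestration layer codes against one interface, and per-vendor drivers plug in beneath it.

```python
# Sketch of a hypervisor-agnostic provisioning layer: orchestration tooling
# targets one abstract interface; vendor-specific drivers sit underneath.
# Driver internals here are placeholders, not real vendor API calls.
from abc import ABC, abstractmethod

class HypervisorDriver(ABC):
    @abstractmethod
    def create_vm(self, name, cpus, memory_gb): ...

class KvmDriver(HypervisorDriver):
    def create_vm(self, name, cpus, memory_gb):
        return f"kvm:{name}"      # a real driver would call libvirt here

class HyperVDriver(HypervisorDriver):
    def create_vm(self, name, cpus, memory_gb):
        return f"hyperv:{name}"   # a real driver would call the Hyper-V APIs

def provision(driver: HypervisorDriver, spec):
    """The orchestration layer neither knows nor cares which vendor runs it."""
    return driver.create_vm(spec["name"], spec["cpus"], spec["memory_gb"])

spec = {"name": "web01", "cpus": 2, "memory_gb": 4}
print(provision(KvmDriver(), spec), provision(HyperVDriver(), spec))
# prints: kvm:web01 hyperv:web01
```

Swap the driver and the processes, catalogues and workflows above it stay exactly the same - which is precisely what makes the hypervisor itself a commodity.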

You only need to take a look at IBM CloudBurst to see what is possible with non-VMWare tool sets (and the fact that the likes of IBM are willing to support this config, too!)

Open source will win! Has VMWare become to virtualisation what Solaris is to Unix (i.e. proprietary)? Not sure it has yet - however, they will need to navigate their strategy carefully to ensure that this does not happen. You just need to look at market pressures and the commoditisation / consumerisation of products to see what happens in this circumstance. In the same way that Linux is now way more prevalent than the lock-in Unix products, virtualisation tools will go the same way (hell, look at Red Hat - they are shipping KVM within 6.1 - why wouldn't you use it if you can wrap the smart management stuff around it?)

A final thought, and tech that is worth following: OpenStack / OpenStorage. Follow these, and understand them... Are they fully matured yet? No! Are they gonna get there, and are they usable? HELL YES!!! These types of approaches will fundamentally change how infrastructure is architected and, more importantly, how the application layer is delivered. The smarts that VMWare have introduced and customers have used will start declining at some point. The smart way of working is not to solve the world's ills at the infrastructure layer, but to solve them higher up the stack at the application and orchestration layer.

Have a good one!


- Posted using BlogPress from my iPad

Thursday, 11 August 2011

My iPad

So... I had to write about it... I finally fell for the rotten fruit brand and purchased an iPad..... I had avoided the hype and managed to watch various people use different devices. I even played with the ASUS Eee Transformer, which was v good... However, it's just not as usable as the iPad...

This thing is brilliant - the only downside I have found thus far is the lack of Flash, but I guess I knew that before I purchased the device...

grumpy storage - please no nasty comments about apple being the devil ;-)

Have a good one!!!

- Posted using BlogPress from my iPad

Monday, 13 June 2011

And my block cloud diagram (coupled with prior article)


Are infrastructure people doing cloud the wrong way?

So here we are – banging on about the cloud story, flexible infrastructure, infrastructure as a service, platform as a service etc…
Most things I see and hear about in vendor presentations are really focused on how we make the infrastructure layer more flexible… but are we not just continuing to dig ourselves a hole and perpetuating a legacy way of working?
We are busily trying to reproduce the legacy environments we have today, utilising traditionally written application layers that have a reliance on the infrastructure layer. Hell, even the vendors are doing it - you only need to look at the approach Oracle is taking with Exadata / Exalogic, VCE with vBlock, FlexPod etc……
Surely there is a better way. Sat here doing some thinking over the weekend, I came to the conclusion that we increasingly need to turn this cloud stuff on its head (from an infrastructure perspective)!
Why are we not provisioning a core infrastructure that is essentially very dumb? Rather than trying to offer infrastructure services that provide resilience, high availability, scalability, disaster recovery, encryption and capacity planning (the list goes on), let's start giving the App Dev community requirements and principles they should code to. They are a smart bunch - and given the challenge, and the promise of more agile delivery, they would probably jump at the chance to embrace it.
The type of development principles / rules I had in mind were:
- The application will scale out and spin up engines as it needs more performance or more bandwidth
- The application will be coded in such a way that resilience is provided at the app layer. If stateful data movement is required, a publisher / subscriber (pub/sub) bus must be used
- App owners and development folk know / understand their data better than any infrastructure dude can - if they need encryption, put it in at the application layer
- The application will be coded in such a way that DR is provided at the app layer (using approaches such as dual commit / writing to many places etc.)
- From this point forwards, let's use a converged framework for development (tools / libraries etc.)
- As the application sees BUSINESS TRANSACTIONS go out of tolerance, provision some more app instances / put application logic in to deal with this scenario (no doubt some of you grid types are saying you have been doing this for ages ;-)
Admittedly, some smart stuff needs to happen at the middle layer to take care of the spread and movement of data, ensuring transaction data is published and subscribed in the appropriate way - but the interfaces between app and infrastructure should be unhooked, and we should take more advantage of pub/sub message buses to interconnect.
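To make the pub/sub point concrete, here is a toy in-memory bus (topic names and subscribers are invented for illustration - in practice this would be a proper durable message broker): the publisher never knows or cares which infrastructure its subscribers live on, which is exactly the unhooking described above.

```python
# Toy publish/subscribe bus: the app publishes business transactions to a
# topic, and any number of subscribers (peer app instances, DR copies,
# audit stores) consume them without the publisher knowing where they run.
from collections import defaultdict

class Bus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)  # a real bus would deliver this async and durably

bus = Bus()
primary, dr_copy = [], []

# An app instance in one DC and its DR peer both subscribe to state changes,
# so the state exists in two places the moment the transaction is published.
bus.subscribe("orders", primary.append)
bus.subscribe("orders", dr_copy.append)

bus.publish("orders", {"id": 1, "value": 100})
print(primary, dr_copy)  # the same order now held by both subscribers
```

The "dual commit / write to many places" DR principle above falls out for free: adding another subscriber is all it takes to hold state somewhere else.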
Once you start doing this, all the stuff that infrastructure people worry about starts falling away, and the App Dev community no longer has the dependency on infrastructure - the opportunity to be more agile starts becoming reality. Let's also face it - application development is far closer to the business than infrastructure is.
Let's start ripping out the complexity of the tin / infrastructure layer and start moving up the stack to do the clever stuff. (Just as an aside - one of the things I am struggling with (for the enterprise) is how you ensure that items such as data are catered for in major DR events (i.e. holding state somewhere else). I still think the concept of Prod / DR may exist for a while, so this needs to be catered for in any model you propose.)
At the moment you have the likes of the infrastructure people trying to solve a problem that they simply can't fix (we can for the legacy way of working - but not for the brave new world!)
OK - so maybe this does not solve the legacy problems - but if we start tackling this way of working for NEW application development, and continue with some of the current practices for legacy applications, it has got to start improving things.

If we carry on doing things in the same old way, the various infrastructure vendors will continue to empty our pockets of cash and, more importantly, we will continue to deploy things in the same old legacy way. We should really re-focus that cash on a better way of doing things.
I am probably preaching to the converted – but it makes sense to me!
And now, finally – for a diagram!

Monday, 23 May 2011

Vertical stacks - and what I expect to hear.....

I am having a few “challenges” getting vendors to understand what would motivate me to purchase their own “vertical stack”…. Oracle, IBM, HP, VCE, Dell and others all have them – and yes, this is aimed at all of you!

When you come knocking at the door selling your wares, firstly understand what you are selling and why I would be interested… Understand how the enterprise works and what integration needs to be completed.

What I expect to hear:

- Yes, all the interop testing is managed on my behalf

- Yes, all of the reporting / dial-home etc. is included

- Capacity planning tools are included

- Management is simple and done from one place. All aspects can be handled in one place:

  - Hardware configs & management

  - Compute allocation

  - Network configuration

  - Storage configuration

  - Setup of storage replicas, both sync and async, to disaster recovery sites

  - Hypervisor

  - Virtual machine instances

- Yes, it is capable of production / DR and we have standard patterns for this

- Yes, I can back the thing up

- Yes, I understand how I can build this into ANYONE's infrastructure

- All of the standard patterns we have mapped have full ISV support (and reciprocal support agreements are in place)

- Standard patterns describing all use cases are available (anything outside these use cases is NOT a valid use case)

- Documented pre-reqs for my infrastructure - on what is expected to make this work


What I expect NOT to hear:

- We suggest that products x, y, z will meet your requirements and we can validate them - but you can use anything that you want to (I want a predefined stack)

- You have to pay extra for monitoring (I want a predefined stack)

- You have to pay extra for stack capacity planning tools

- To complete the integration you will need to pay $<Insert lots of money here> (why?? It's a predefined stack)

- Oh, we don't have full capability to monitor just yet (but it's a predefined stack - surely you can do that?)

- We are working on capacity planning models (erm - it's a predefined stack - you know the characteristics and can plan workloads)

- We are working on a roadmap - but we haven't got one at the moment (erm - you know which components you are going to use, so you understand the roadmap... yep, here it comes - it's a predefined stack)

- You don't have to worry about that, as we take care of it for you - YEAH RIGHT!!!!

- Every company can use cloud - you just need to plug our hardware in

- Ahhh yes - we recommend this product, but it is not included as part of the overall cost of the stack - you will have to engage with that vendor directly (erm - why? It is part of the stack, surely... isn't it????)


In short - when you come and promise me that a given integrated solution can reduce CapEx and OpEx, meet all of my business requirements, offer world peace and solve the current economic situation - please ensure you can back it up and that the business case stands up.

I don't expect to have to teach you about how the elements bolt together, or to know more about your infrastructure than you do - I really want the vendor to test my knowledge, not the other way around.

Finally - hypervisor technology is commodity - please remember and plan for this… Customer tie-in is not a good place to be; as soon as you do this, you are turning a commodity item into a non-commodity. As an end customer, I want choice…

If I have to sit quality engineers in meetings to understand how to build kit into our infrastructure, and spend a heap of time doing it, the vertical stack exercise has failed….


Tuesday, 29 March 2011

What's the story with IBM Block storage line-up?

A few weeks ago I wrote an article on the confused line-up / product alignment that I believe EMC now has with VNX and Symmetrix… I guess if I was confused about the product overlap there, I am totally confused by the IBM story… So far we have:

- IBM DS8k

- IBM DS6k

- IBM XIV

- IBM V7000 (latest new shiny)

Four block storage products, all fighting for placement. Up until recently, XIV was IBM's new shiny thing, with (what appeared to be) DS8k and DS6k taking a back seat where standard open systems needed block storage.

Of course - we now have the V7000 out there, with IBM pushing the product.

By the way - I am making no judgement call on the product technology or whether it is fit for purpose - I am purely observing a confused product line.

I think a well-publicised "horses for courses" statement needs to be forthcoming from IBM so that potential customers can understand which product they should / should not be looking to utilise.

And in a parallel universe, NetApp announce NUCSP.

Today NetApp announce their Unified Compute and Storage Platform (NUCSP)

NetApp have taken their OnTap 8.x platform and taken advantage of the many cores now available within their scale-up and scale-out clusters to provide general-purpose compute services located close to storage.

OnTap 8.x (based on an open-source Unix derivative) is now pre-loaded with a hypervisor that allows them to run both their native storage software and general-purpose operating systems.

This is rumoured to also allow them to complete their Bycast integration - not by integrating into the OnTap suite of software, but by running it as a separate OS instance within the hypervisor.

Tuesday, 8 March 2011

Blurred Marketing - EMC Product Line - Symmetrix or VNX??

Is it me, or is EMC's marketing machine blurring the block storage products…. Should I choose Symm or VNX (Clariion)??

All the hype recently has been around the VNX - how it can scale big (and small) - which is blurring the edge between "when do I use a Symmetrix" vs "when do I use a VNX". Of course, you don't appear to get the scale-out of engines on the VNX that you get on a VMAX - nevertheless, if you listen to the sales hype, you get told it can grow big… really, really big (who knows - one day you might get more than just the initial controller pair - maybe…)

And then there is the time to market with the VNX offerings and the functionality included - arguably, the adoption rate of technology within the platform is much faster than within the Symmetrix product line. Admittedly, you do get the Rolls-Royce service with Symmetrix and a high level of stability - but this is almost starting to turn into a "risk vs reward" discussion, and with the ever-increasing focus on cost within many organisations, it may well end up that we see a move from Symmetrix to VNX in the large enterprise space (maybe…).

It would probably help if the sales teams didn't big up both platforms and tell us that each storage offering can do the other's role in life… I am half expecting EMC silos to start bidding competitively against each other within a single set of accounts - which could look very strange!

Feels a little "Austin / Leyland" to me - and I might just end up buying an Austin Allegro if I am not careful (for those US folk that don't get this - email me and I will explain).



Saturday, 5 March 2011

Vendors - Stop selling from the top down - IT DOESN'T WORK!!!

This whole vendor approach of getting into an org at the top level (i.e. senior management) and ramming technology down the technologists' throats just pisses me off…

Why oh why do we continue to see this behaviour from big vendors that should know better….

It does nothing other than generate animosity, and it rarely delivers value. Collaborative buy-in from the right people in the right teams is a far better way to work!

It's one thing making idle promises to senior management about your latest shiny thing and its impact on CapEx and OpEx, with the added "of course the technology will work" and a good sprinkling of "oh, we will make the numbers work for you" - you will get found out!

If you want to sell a product, I would suggest that the better approach is:

- Validate the approach works internally and is viable (a basic step - but missed by vendors many times over)

- Test the water with the various technology communities within the org, and get vendor management involved at an early stage - show that a business opportunity exists and that it technically stacks up. Internal sponsorship is such a winner!!!

- Work as a team - vendor / customer teams work so much better as you sell the opportunity into the org. If the technical offering stacks up and the commercial model fits, this is a great win/win

- Then, once the above requirements have been fulfilled, start selling at the higher management level

- Make sure that the business case truly stacks up and can be proven (and make it easy to understand) - and please understand your own numbers: I don't want to have to explain them back to you!

- ALWAYS, and I mean ALWAYS, include the cost of change…

Please DO NOT darken my doorway with your presence if you have not followed the above steps….. Your failure is of your own making!

Rant over - have a great weekend!