Saturday, 18 December 2010

EMC Customer Council 2010

I have recently got back from the EMC Customer Council and thought I would write a few words about the experience (and before you get all excited – this is NOT going to cover any of the product / technical / roadmap detail!) – just what it is like to attend.

For those not lucky enough to have attended customer council, a few words about the concept. This is a once-a-year meeting where EMC gets together a good cross-section of customers to discuss roadmap, but more importantly to gather lots of feedback from customers on various product sets and market trends, understand customer strategies and then feed that back into their own strategies to drive product and customer services – so it really drives direction back into the vendor.

The executives of the EMC business are in attendance, along with the core strategists and decision makers for the product sets. Your feedback goes straight to the top and gets listened to (which is of course not to say that all your suggestions make it into end products 8-).

Now – I am sure people are sitting back at this point and saying “sounds like a bit of a jolly to me” – well, you couldn’t be more wrong. The time spent here is intense. Sessions start first thing in the morning and run through into the evening – you finish the last session with a severe case of “brain ache”.

It’s also not about the customers sitting back and listening; in fact it’s quite the opposite. This forum is all about the customer talking and EMC listening – and it works!!! The sessions are all very engaging; people are frank and honest with their views on various topics. (In fact I think the customer group talks more than the EMC employees – which is great!)

It should also be said that this is not a customer / vendor love-fest. People disagree and debate on product sets and approaches (inclusive of customer to vendor, vendor to customer & customer to customer), which is very healthy and valuable – and we all keep each other honest.

The council is (for me anyway) just as much about meeting your peers in other companies, understanding how they are tackling the issues that you may also have, and learning from each other.

Whilst the main reason for attending is to supply feedback, the dynamic between customers is excellent – every year I walk away from this session having gained a wealth of experience, not just from EMC but also from the other attendees – a bunch of very clever people attend, and you learn lots.

And it is far from just storage that gets discussed – examples of topics this year were:

  • Cloud
  •  VDI / Hosted Desktop Services
  • Storage
  • Tooling
  • Virtualization
  • Federation of services (and not just storage)
  • Network Convergence
  • Database Technologies
  • Interop
  • Professional services approaches
  • Roadmap of product

The list goes on and on and on... (and these are only the planned sessions – the conversation goes on long after the end of the day and continues late into the night (very, very late...))

There is of course a social side to this as well – it would be just plain wrong if a few beers were not consumed – but that is also part of the fun, and it always loosens people's tongues – which of course causes more debate :-)

Quite simply this is the best, most professional, best-organised customer session that I attend, bar none (and I have done many...)

I have met some great people and would like to think that I have also made some great friends, both peers in other companies and within EMC.

Finally – I guess a few thanks: to all the guys and gals at EMC that make this happen (from execs to techies to show facilitators) and also to all the attendees that make this such a great learning experience.

One more comment – if you ever get the chance to attend customer council – DO IT! You will have a great time, learn loads and meet some fantastic people.

Cheers,

Stuart.

Friday, 10 December 2010

Another storage bidding war - maybe??

So it has all started again... Dell are back on the trail for owning a bunch of storage. Are we about to get another bidding war? If Dell succeed in their mission there will be another player with their own stack (well, hardware anyway...).

The more interesting point will be if their competitors start bidding for the company just to keep Dell out of the stack-wars market. An interesting one to watch!

Cheers,

Stuart.

Tuesday, 28 September 2010

A man takes his Rottweiler to the vet

I could not resist posting this...

A man takes his Rottweiler to the vet. 'My dog is cross-eyed, is there anything you can do for him?'
'Well,' said the vet, 'let's have a look at him.' So he picks the dog up and examines his eyes, then he checks his teeth.
Finally, he says, 'I'm going to have to put him down.'
'What? Because he's cross-eyed?'
'No... because he's really heavy.'

Thursday, 23 September 2010

HP Jumping on the Cloud bandwagon (finally!) - "HP Cloud Start" (HP version of vBlock?)

So,

It looks like HP are finally jumping on the cloud bandwagon and releasing their own Virtual DC / Cloud brand. I am guessing this is a response to the EMC / Cisco / VMWare Acadia setup (jeez are they lagging behind!).

HP are calling it CloudStart - I guess this move was inevitable, given the h/w and s/w stack they now own. It looks like all of the consultancy, design, build, deploy and then handover will form a part of this (an assumption on my part).

What is unclear is which components will be included / excluded as part of this wrapper... For example, will HP's automation tools be included (Server Automation, Stratavia) etc? We can be sure, however, that there will be inclusion of its network tech (formerly 3COM) with some of its FlexFabric modules thrown in for good measure, server kit (C7000 chassis if they have any sense) and some storage solution (iBRIX / 3PAR / LeftHand??)

Are we in fact going to see a CloudStart-1, CloudStart-2 and CloudStart-3? (Of course, if I had started from 0 it would be too close to a certain other block-based virtual data centre offering - and we wouldn't want to start an upset there, would we ;-)

I guess there are a bunch of pieces missing from their portfolio if they want to own the complete end-to-end stack, the most obvious being a data warehouse of some description (maybe they will go chasing after Teradata!), and I am not sure what they have on the security side - some work may be required here!

So, final thought - it will be interesting to see how they bring this lot to market, and if they can pull it off and get interest from the enterprise.

Let's see what happens.

Cheers,

Stuart.

Tuesday, 21 September 2010

Where technology really makes a difference

For those of you that know me - you know that I have a pretty big love for all sorts of technology, love the odd gadget or ten and really enjoy the compute infrastructure area that I work in - it's great fun and I couldn't imagine doing another job that satisfies both my tech curiosity and the ability to work out why things do what they do...

Basically - I think technology is really cool - and I just love playing with it... However, I have now got a new appreciation for technology and how it can help us (shame it took me 38 yrs to build this view!).

You may not be aware that my wife (Jill) gave birth to our 3rd child, Holly Isabella, on Thursday 16th September 2010. As you would expect a proud father to say, she is truly stunning and has changed our lives forever... And Jill, well she just continues to amaze me - after giving birth and having her body put through physical stress that I can only imagine, she was up and around 4 hours later, taking care of Holly and fussing over our other two children (Conor and Abbie) - the next day she was back home and, with the exception of taking a few paracetamol, was walking around as normal (like I say - pretty amazing!)

So - what is this article about? Well, the thing that truly amazed me was the tech that was to be found in the birthing suite where Holly was born... Was it shiny and fancy looking - no! But this stuff is far cooler than that - this stuff saves lives, monitors health and really helps us out!

You could monitor and see what was going on with Holly before she was born, you could see how Jill was doing, and tell when things were going well (and in some cases not so well, where heart rates started to drop etc). The combination of all these monitors and machines with the medical experience of doctors, nurses, midwives etc was just mind-blowing. This tech allowed them to make informed decisions that truly made a difference to people's lives (not just adding $'s onto some business transaction).

Our No.2 child was born just over 6 years ago, and the other thing that surprised me this time round, being in the hospital, was the advancement of some of the machinery...

I guess what I am trying to say is that it was nice to see technology really make a difference to people's lives rather than funding some bank balance, building some war machine or driving some multi-zillion dollar transaction that makes no real fundamental difference to people's lives.

Anyway - that's it really, other than to say: Baby Holly is doing fine, Jill is doing great and we could all do with just a tad more sleep!

Cheers,

Stuart.

Monday, 20 September 2010

Another Acquisition war unfolds - The Database Stack

You have to wonder where the next round of tech acquisitions is going to come from. My bet - another in the database world (and this follows the recent IBM news on Netezza)...

EMC now have Greenplum, which has the makings of a very interesting database machine, Oracle obviously already have Exadata, and the internet is buzzing with the sound of IBM buying Netezza.

So who needs one... Well, HP obviously (a little bit of humor here - but just a little!). They have just purchased 3PAR and now have a complete h/w stack that can offer a virtual data centre infrastructure - they just need some services to run on it. Database would be a big market to go play in - and it would keep the EMCs, IBMs and Oracles of this world honest.

So who would HP take a punt at then? My money is on Teradata. They could afford them (they just need to convince the shareholders that it would be worth the cash, after a big bunfight with Dell over 3PAR).

I have to say - it's pretty interesting watching this "stack war" unfold... It will be interesting to see who the final 4 vendors are that will control this market. I have concerns about Dell making it - they have lost out on the storage platform and have shown their cards once to EMC - where would another acquisition war leave them (especially if they lost again)?

Cheers,

Stu.

Thursday, 9 September 2010

Cloud services - as long as it's x86 only!

The current crop of cloud-enabling products, automation services, virtualisation products and hardware stacks that are allowing us to build internal and external cloud services is stunning.

There is a whole heap of acquisitions occurring; VMWare bringing products such as vCloud Director to bear, and SpringSource for the development community, are driving things into a bit of a frenzy (and I almost forgot to mention that Microsoft are out there winding it up also).

However - I have a problem with all of this brand new shiny stuff / way of working - and it is just that - it's brand new and shiny, and only potentially accounts for the net-new environments that are being provisioned (and not even all of these are being catered for (SPARC? AIX? Sol Zones?)).

The whole point of cloud services is that it is just that - a service, and it should be able to abstract any platform regardless of what hardware / OS / virtualisation product / application may be running.

Few of the vendors / end users out there are taking notice of the "too hard to do" pile and are focusing on the easy-to-do pile (OSes that run on x86 and can be virtualised by either VMWare or Hyper-V seem to be the current trend!)

What about AIX?
What about Solaris on SPARC?
What about being able to span Solaris Zones?
And the list continues (if you want my full list - please feel free to contact me).

As for the argument that commodity hardware is where we are going and we should focus on this area (again - read x86 platforms): we have a huge current estate that covers a whole bunch of non-x86 platforms and I would want to do the lot... Surely the vendor community does not want to suggest I now have two support teams, one for the new shiny cloud and one for the current way of working? That's just not viable!

One last thought on this subject - all vendors are now coming out with their own vertical stacks (such as Oracle Exadata) - and we will see more of this trend, be it server, database or application - so these cloud-enabling tools had better be able to integrate with these types of approaches, or that too-hard pile is going to start to increase and the whole concept of cloud will just fall away.

Cheers,

Stuart.

Thursday, 26 August 2010

3Par tussle continues

The 3Par story rolls on.... It sounds like Dell have now offered a response to HP's bid for 3Par - if the rumors are correct, another cool $100m - shareholders must be loving this little tussle.

Still strikes me as interesting that Cisco / Oracle haven't bundled in there yet (Cisco do have a reputation of arriving late but then taking over the party - so let's see what happens).

There is a great article that covers the whole history of companies that have bid for 3Par (past and present), which can be found here: http://siliconangle.com/blog/2010/08/25/special-report-inside-the-hp-dell-bidding-war-for-3par-will-the-company-fetch-more-than-2b/

Very much worth a read... An interesting point of note - 3Par have hired Frank Quattrone (previously of Credit Suisse First Boston fame), who has a bit of a reputation for these types of tech deals (Data Domain into EMC as an example... look how that tussle went between NetApp and EMC).

It also seems that the tech buzz around storage is happening all over again - companies such as Atlantis (who have just managed a 3rd round of funding) and Delphix are starting to emerge with some really smart IP... It will be interesting to see if the feeding frenzy continues.

This, combined with everyone chasing after the virtual datacentre space, is gonna lead to an interesting period of acquisitions - and it will be interesting to see how these companies actually manage to integrate / merge the new acquisitions into their portfolios and make them feel like coherent product offerings.

Atlantis: http://www.atlantiscomputing.com/
Delphix: http://www.delphix.com/

Cheers,

Stuart.

Tuesday, 24 August 2010

3Par Acquisition - Interesting times

The bidding war that is emerging over 3Par is interesting to say the least. Firstly, the sheer amount of news it is generating is quite mad and the interest from all the tech companies is extremely high (the fact it made it onto the front page of the FT Business section is pretty telling).

 

The vendors that are wading into this little war are also quite interesting – currently HP and Dell – but should we expect to see the likes of Oracle and possibly even Cisco / NetApp get involved????

 

A couple of the more interesting things that the winning vendor gains if this battle is won:

  • The amount of R&D that is purchased / the leapfrog of technology you get built into 3Par’s solution / block storage technology – it's considerable!
  • The position they purchase in terms of “storage delivered to the market” – the potential to leapfrog all except EMC (in quantity shipped)
  • It puts the purchasing company straight into the Tier-1 space for block (with the possibility of gaining a viable NAS solution as a result of the takeover).

 

The real point of interest that I see from this is the Virtual DataCentre (VDC) play!

 

Currently – the Acadia (vBlock - Cisco / EMC / VMWare) venture is really the only show in town that offers the joined-up view of servers / storage / network / hypervisor – but this is a joint venture, and a number of vendors playing together nicely, to deliver a solution.

 

By purchasing 3Par, a server vendor now potentially owns the complete end-to-end stack (at least from a h/w perspective) – and it's only going to take a little cosying up with Microsoft / Hyper-V to have a complete end-to-end virtualisation / cloud solution (with that small little matter of management tooling (oh – do HP have orchestration tools / automation tools and some other interesting tools to manage VDC environments – yup, pretty much…))

 

I think there are a number of key takeaways from what is occurring in this market space at the moment:

  • The VDC / Cloud stack is still an immature market with many acquisitions and technology changes yet to happen (Hitachi / Acadia (vBlock) / Dell or HP or ? etc)
  • This is going to take some time to play out and to see who the winners / losers / main players will be.
  • Potentially, this is a big disruption in the market area that Cisco are trying to dominate – as it is possible another vendor could own the end-to-end h/w infrastructure (server / storage / network) rather than doing this through a partnership (and as we know, partnerships can cause all sorts of interop discussions / vendors-playing-nicely issues) – with Cisco trying to get into the server market, does this dampen things for them?
  • Now is not the time to buy into a full compute stack; the market is shifting rapidly. IMHO this will take a good 6-12 months to stabilise before you can start making judgement calls.

 

That’s all from me,


Stu.

Saturday, 12 June 2010

HP buys instant-on OS and virtualisation capability from Phoenix Technologies

Anyone out there spot the purchase that HP made on 10th June? They have just bought HyperSpace from Phoenix Technologies. The space that this tech plays in today is around netbooks / mobile compute etc (and of course HP recently bought Palm) - but HP already have tech in this space...

You have gotta ask what HP are going to do with this, and where they are going to use it... Possibly competitive technology for the next generation of tablets (an iPad competitor), or further development in the VHD space?

Personally I think they are chasing after the full-on virtualisation space... EMC / VMWare / Cisco are making loads of noise about the whole vBlock thing - and HP don't have a "me too" solution - the question is whether they are going to be able to create something that is competitive or compelling (or are they chasing this at all)?

It's arguable that whilst the noise around vBlock continues - how compelling is it really, and where is that full "balls-to-the-wall" TCO that you need to prove it?

If HP are going to enter this space also, they really have to have a
compelling model that gives you flexibility, supportability, a cost
model that you can understand and prove (added that in there for you
@ianhf 8-) but also stay "open" - people don't like lock-in!

Of course - this is all guessing around what HP are up to - let's watch and see!

Have a great one - and of course (I have to say this) - COME ON ENGLAND - let's have a great first game today boys!!!

Saturday, 15 May 2010

Are Oracle about to eat a well-deserved portion of humble pie??

Hey there - Howzit, hope all is good out there!

So it is with some interest that I have watched the recent announcements of takeovers / mergers and acquisitions... Some notable ones being GemStone being purchased by SpringSource (VMWare) for their caching layer, Adaptec going to PMC-Sierra and, of course the most notable, Sybase being acquired by SAP for a rumored cool $6.1B.

So the last of those, for obvious reasons, has really caught my attention... The major reason given in the press for this purchase is mobility - and some of the cool tech that Sybase acquired a while ago - however you have got to ask if this is going to cause Oracle some more hurt...

I mean - let's face it, Oracle have been trying to dictate the tech that end-users will buy. Over the last year or so (after the Sun acquisition), you only need to mention some of the recent attempted changes to product support for Solaris x86 to technology owners and you can watch blood boil, tears start, ranting begin and general mumblings of "if I had my way it would be ripped out" start... Enterprises have had a really rough run with Oracle in the last few months...

If it hadn't been for the recent hold on the database market that Oracle have, and Sybase doing a really pants job of getting their price point right, it might be a very different landscape now, with Sybase (before this purchase) starting to make big inroads back into enterprises again.

There is a huge opportunity here for SAP to take a look at its portfolio of products, look at synergies within the organisation (i.e. back office costs) and use their selling power to get Sybase back on track, put some real competitive scenarios in place between themselves and Oracle, and get Sybase DB products back as the premier preferred product in the enterprise - and Oracle out...

Of course the other purchase that is of interest (and concern to Oracle?) is that of VMWare's purchase of GemStone (who make data caching products) - this is a direct competitor to Coherence, and whilst they (GemStone) are not in as many accounts or as big as Coherence, arguably this was due to the size of the company - now they have a monster of a firm backing them up, a hugely viable company (VMWare) and a storage company (EMC) who know a thing or two about storage...

Is it time that Oracle ate some humble pie and realised that customer choice is once again going to decide, and that the dictatorship may be over?

Cheers,

S.

Saturday, 1 May 2010

Confused.....Object storage, Bycast (Bygone?), NetApp, Atmos and NotOnTap

Howzit then??
So, as of, well, erm, "some time ago", NetApp were well on their way to putting together their object storage roadmap and it was due for release in the near-time... All fully integrated into NOnTap etc etc... It's all going great...

Of course, NetApp have also been in acquisition mode for another product; remember the bidding war that emerged over Data Domain? That clearly didn't happen - but hey, we knew something was on the cards...

What was announced last week? Well, NetApp acquired Bycast (Bygone?) - which is a great solution that does some funky stuff with "cloud storage" and all this object-based stuff that we need for cloud-type services...

Hold on... Object storage... Didn't I say that NetApp had it on their roadmap some time ago... Oh yeah - I did... Right at the top... So what is the deal here - were our NOnTap friends really on their way to creating an object-based storage solution, or were there a few porky pies told / a little over-optimism with deliverables?

OK - anyways, so maybe we have two object-based / cloud-type storage solutions, or one - as long as we have something, then it's all good... But I have another worry here... I am sure that all of us out there that use and enjoy NOnTap functionality remember NetApp purchasing a company called Spinnaker, and some 5 years or so later, do we enjoy all of the integrated functionality? Maybe not... we are talking about clusters possibly running in 7 or 8 mode... I think the date for NOnTap 8.1 is still pretty fluid...

What is my faith in NetApp being able to integrate Bycast into NOnTap in a timely manner? It's pretty low to say the least... I would really love to see some statements from NetApp around both roadmap (from a "we are doing it already" object storage perspective) and also details on integration timelines, along with release dates etc.

Whilst we are on the topic - EMC Atmos - so what's the deal here then??? Is it a globally dispersed CIFS / NFS presentation layer? Is it an object store that can be global in nature? Is it a bird? Is it a plane? Is it SOOPERSTORAGE??? Is it Cloud Storage??? In fact - I would love the sales team that market it to give me a clear and concise view as to what it is, what it is trying to achieve and what the roadmap plans are for the product set... Don't get me wrong - I think that this product has its place (and I think I have found a use for it) - but I am not sure I get the whole deal and how the product set is really gonna hang together.

I guess what I am really trying to say is: what is the vendor community up to here, and other than calling this bunch of stuff cloud storage, where is the coherent story that is dragging this lot together???? I would love the various vendors to take us through this "journey" we need to take and to understand what fits where and does what, which cloud it is and which product is playing in which space - I am sort of guessing at the moment!

Cheers,
Stu.

What sort of things would I like to see included in storage array federation??

Hi there all, so there have been a good few discussions going on between a bunch of us, namely: @storageanarchy @3parfarley @ianhf @storagebod @stuiesav (myself) and numerous others (apologies if I have missed anyone) around the subject of storage federation...

Firstly - a big "Get fixed soon" to Marc Farley - take a nose at his blog - it is really quite humbling to see what technology is capable of... forget about all this storage and infrastructure malarkey - keeping people healthy, that is where it is at... Take a nose:
http://www.storagerap.com/2010/04/the-technology-im-trusting-today.html

Now - back to business - just a little rant... honestly, just a little baby one..... I truly, truly think it would be of great help to the industry (both vendors and customers) if some firm definitions of terms were agreed, and in this example - why federation does not equal virtualisation and virtualisation does not equal federation (I'm still struggling with it whilst writing this article - please don't flame me ;-) I really believe that the governance and standards bodies could help here by writing a dictionary of terms etc.

Right - back to the topic at hand - as well as the Twitter traffic there have been a couple of good blog postings, one from @3parfarley which can be found here:
http://www.storagerap.com/2010/04/zeroing-in-on-a-definition-for-federated-storage.html
another from @ianhf which can be found here:
http://www.grumpystorage.com/2010/04/feature-stacks-and-abuse-of-language.html
and finally some of my own thoughts on some of the hype that is surrounding this topic, which can be found here:
http://www.stuiesav.com/2010/04/emc-federated-storage-arrays-really.html

Views and thoughts have been shared and it's been a good discussion. I thought I might just add a few more views in terms of the types of functions / behaviors that I would expect from a federated storage infrastructure...

This really is not a hard spec of items, but rather some rough scribblings that I thought peeps may be interested in... so here goes for a bit of "rough":

Day 1 functions that I would like to see (again rough...):

* Allows non-intrusive lifecycle in and out of arrays
* Allows a re-balance of data across arrays
* Load balances (guess this couples with the above point)
* Scales (isn't limited by LUN counts, array counts etc) - as an example, I don't want the federation layer becoming a bottleneck
* Is storage-array agnostic and can federate data spread across arrays of disparate types
* Understands the performance characteristics of the arrays that are being federated (a look-and-listen approach)
* Ability not just to tier on disk within an array but across arrays, based on the holistic performance of the arrays, and can place data into different locations
* Is not tied to geometries / LUN boundaries / like-for-like configs; can move and translate
* Ability to offer replication (not sure this is a federation function - but replication between unlike arrays is attractive)
* Can spoof previous geometries of configurations to allow easier migrations between host types / arrays etc
* Has a policy engine that can define what should be where, when data needs to be evacuated / drained (in the event of system lifecycle) and where data should be right-placed (there is a rough sketch of this idea just after this section)
* Any host able to map to any component within the federated array configuration without the pain of additional LUN masking, zoning etc etc etc (i.e. help me with the provisioning nightmare that is involved with moving data around)
* Federates over short distances where possible (i.e. arrays within a DC complex or up to an 80km spread)
* Moves the smarts normally associated with the array (such as snapshots, replication etc) up to the federation layer

I would also like to put into day-1 the ability to federate to different types of storage, i.e. file systems, block, object stores, links into various APIs etc - but I guess this could be asking just a little too much...
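
To make that policy-engine / right-placement idea a touch more concrete, here is the rough sketch mentioned in the list above. It is purely illustrative Python - the class names, tier labels and thresholds are all made up by me and do not correspond to any vendor's product or API - but it shows the shape of the decision a federation layer would be making constantly: which array satisfies the policy for this bit of data, and how an array that is being drained for lifecycle drops out of the running.

```python
# Illustrative sketch only - invented names, not any vendor's API.
from dataclasses import dataclass
from typing import List

@dataclass
class Array:
    name: str
    tier: str               # e.g. "fast", "mid", "archive"
    free_gb: int
    avg_latency_ms: float
    draining: bool = False  # set when the array is being lifecycled out

@dataclass
class Volume:
    name: str
    size_gb: int
    required_tier: str
    max_latency_ms: float

def place(volume: Volume, arrays: List[Array]) -> Array:
    """Pick an array that satisfies the policy, preferring the one with most headroom.

    A real federation layer would trigger a re-balance or a tier change here
    rather than simply failing when nothing fits.
    """
    candidates = [
        a for a in arrays
        if not a.draining
        and a.tier == volume.required_tier
        and a.free_gb >= volume.size_gb
        and a.avg_latency_ms <= volume.max_latency_ms
    ]
    if not candidates:
        raise RuntimeError(f"no array satisfies the policy for {volume.name}")
    # the "look and listen" bit: prefer the array with the most free capacity
    return max(candidates, key=lambda a: a.free_gb)

if __name__ == "__main__":
    arrays = [
        Array("array-01", "fast", free_gb=800, avg_latency_ms=4.0),
        Array("array-02", "mid", free_gb=5000, avg_latency_ms=12.0),
        Array("array-03", "fast", free_gb=2000, avg_latency_ms=5.0, draining=True),
    ]
    vol = Volume("oltp-logs", size_gb=200, required_tier="fast", max_latency_ms=8.0)
    print(place(vol, arrays).name)  # -> array-01 (array-03 is being drained out)
```

Obviously the real thing would be doing this continuously, against live performance data and across geometries, hosts and sites, rather than as a one-shot function call - but the split between policy and placement is the bit I care about.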

Day-2 functions (even more rough....)

* Ability to link into an infrastructure federation layer that works / behaves at a global enterprise level
* Offer the concept of connectors between the federation layer, application, OS and array so that data can truly be moved to an appropriate tier based on API, application definition and policy - a big ask, but as we move to these cloud-based services it's gonna need to be tackled - maybe there is a federation that needs to happen between cloud services??
* Start offering the concept of de-dupe (or maybe I mean single-instancing) across the federated infrastructure - learn what's out there and stop writing the same thing everywhere - I am guessing the amount of metadata required to do this is huge, but it may be less than the data set itself ;-) (hey - I never said this was gonna be easy!) - there is a tiny sketch of the idea just after this list
* Global federation - based on a centralised federation layer, tie in with application definitions and location information - if an application gets deployed in one region or many, the ability to handle that and right-place the provisioning is key. This could play such a huge part in time-to-market discussions - and the fact that you could just tell an infrastructure cloud to provision ready to accept the app would just rock!
* Ability to federate from a storage service that may be in house, with movement / lifecycle out to a cloud service and vice versa... This coupled with a policy engine would rock... (hmmm, never used that term before - what's that all about)
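
And just to show that the core idea behind that single-instance / de-dupe point is simple (even if doing it across a globally federated infrastructure is anything but), here is a tiny, purely illustrative Python sketch of a hash-based "have I already stored this block?" index - none of this reflects any particular product, and the metadata problem I mention above is exactly this mapping, kept for every block across every array and site.

```python
# Toy single-instance store: content-hash every block and keep only one copy.
# Illustrative only - the hard part in a federated world is keeping this index
# consistent and available across arrays and sites, not the hashing itself.
import hashlib

class SingleInstanceStore:
    def __init__(self, block_size: int = 4096):
        self.block_size = block_size
        self.blocks = {}    # hash -> the single stored copy of that block
        self.refcount = {}  # hash -> number of logical writers referencing it

    def write(self, data: bytes) -> list:
        """Split data into blocks, store each unique block once, return the recipe."""
        recipe = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:  # never seen before: store it
                self.blocks[digest] = block
            self.refcount[digest] = self.refcount.get(digest, 0) + 1
            recipe.append(digest)
        return recipe

    def read(self, recipe: list) -> bytes:
        return b"".join(self.blocks[digest] for digest in recipe)

if __name__ == "__main__":
    store = SingleInstanceStore()
    first = store.write(b"the same boot image " * 1000)
    second = store.write(b"the same boot image " * 1000)  # adds no new blocks
    print(len(store.blocks), "unique blocks held for two logical copies")
    assert store.read(first) == store.read(second)
```

The hashing is trivial; keeping that index coherent, fast and available across arrays, sites and (one day, maybe) cloud services is where the real federation work would be.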

Right - that is where I am up to now... I will add to this over time - however, I am on a train whizzing to Nottingham for a friend's bull's party (and might even have a beer or two)...

I am aware that I may be dipping out of storage federation and bumping into infrastructure federation but hey.... there ya go!

It would be really great to hear other people's views / add to the list / tell me I am as mad as a box of frogs etc...

Have a great one!!

Stu.

Friday, 23 April 2010

Virtualisation DOES NOT EQUAL cloud...

I can't remember the number of times that vendors have pitched cloud services to me in recent months, with virtualisation right there by its side... Infrastructure as a Service does not have virtualisation as a pre-requisite. It's about offering a service, having a service model in place and being able to market it / get it out there so that people can start subscribing to said service...

Why you suddenly MUST have a hypervisor included in that really defeats me... What's wrong with Solaris containers, or a consolidated database environment, etc...?

The other thing that makes my blood boil - those out there offering unified platforms for these cloud solutions: make the management services work across all of the product offerings that truly make up the cloud service that you would like to offer. Don't just stop at the VMWares of this world because it's easy - start offering the ability to automate the provisioning of Solaris instances, containers etc.... C'mon - if we are gonna do this thing properly, let's do it, and not go off all half-arsed....

Please see @ianhf 's article here: http://www.grumpystorage.com/2010/02/cloud-everything-as-service-latest.html - very much the same views as my own...

VIRTUALISATION DOES NOT EQUAL CLOUD and CLOUD DOES NOT EQUAL
VIRTUALISATION!!!!

Virus, Anti-Virus and a self-fulfilling prophecy - McAfee

I wonder how many other people got caught out on the evening of the 21st April 2010??? Dodgy virus DAT files distributed etc...

You invest millions of pounds in security and anti-virus products that are meant to protect you - and the very product that is meant to ensure that your expensive compute resources are protected is the same piece of code that brings your organization to its knees (if not caught quickly enough).

It just strikes me as quite ironic that an anti-virus application
behaves and destroys in exactly the same way that a virus should /
would behave...

If I was also into conspiracy theories (which of course I am not....) I guess that it also begs one other question... if the anti-virus companies make money from viruses that are released out into the wild, what is the best way to generate more revenue???

Hey ho - another eventful day in the life of IT.

Have a great one!

Friday, 2 April 2010

Federated Storage Arrays... Really? Shouldn't we just call it storage virtualisation?

I hate the term... Federated Storage Arrays... Let's get over the rebranding exercise - it's "storage virtualisation"... I really don't get why this rebrand has happened, or maybe what I really mean is: why bother with the effort to hide a FUBAR on a previous product that just didn't cut it... Is this about "we invented it here" syndrome, and we don't want to own up to the fact that we got it wrong the first time round??? I THINK SO!

I wish we end-users / consumers of technology were given a bit of credit, and that vendors realised we do actually have brains, we do understand how stuff could / will work and we can spot when a rebranding exercise has happened because of a pants product.

So why is the re-brand happening??? My view - it's just to try and make us forget that the previous product that tried to achieve said piece of functionality was just out-and-out crap, and the vendor didn't want to own up to the fact that the competition had approaches that worked and scaled...

If I come up with an idea and it goes wrong - generally I own up to it and say "I got it wrong" - why can vendors not do the same thing, credit us with having some level of intelligence and stop crafting stories about how this new wonderful thing is a different and advanced approach / paradigm shift or whatever funky marketing term can be crafted...

Rubbish!

Tuesday, 30 March 2010

FC vs FCoE vs 10-Gig-E vs NAS Protocols

This is really just a discussion of which storage network protocol to use, why and when - my view of the state of the world and what we may expect over the next few years from the storage protocol market...

So - There is a bunch of hype and FUD going on around storage protocols, network convergence and value add that network convergence may bring... Whilst there may ultimately be some decent end-game that we *could* aim for, I am not sure we are even close to working out (a) what game we are playing (b) which teams may be in the final and (c) are we aiming for the world cup or maybe (and more possibly) some amateur local league cup - with little to no win...

Some of the discussions ongoing are very reminiscent of the iSCSI vs. FC discussions that no doubt we all got involved in (my last nasty discussion on this particular subject was 2 years ago - but still bubbles to the surface occasionally...)

Who are the teams / technologies that "may be" in play at the moment? There are the obvious two, FC and NFS, but new to the equation (or old if you take FUD into consideration) are FCoE and 10-Gig-E... I will also throw iSCSI in for good measure - if I don’t, someone else will (and before anyone pulls me up on it - yes, I know that I have mentioned one protocol that runs on top of another (NFS / iSCSI / Ethernet etc)).

First thing / bugbear... DCE (or DCB or...) and 10-Gig-E are NOT the same thing - peeps (and especially vendors), please stop using them interchangeably - when you do, it just shows that you DO NOT know your topic at all!!! I am sorry to those of you that think I am stating the "bloody obvious" - but it is a mistake that is made often.

Right – onto a bit of discussion then…..

-------------------------

-Fibre Channel-

-------------------------

You look at most of today’s enterprises and the chances are that most mission-critical environments are likely to be using Fibre Channel. It's proven, it guarantees delivery (rather than throwing stuff away when the network becomes congested) and the networks are well known by the storage / OS guys - as they are probably the people that put them together... You don’t encounter (too) much over-subscription in the network (generally)...

I think it is also fair to say - on the whole - that these networks have been pretty damn reliable with pretty much rock-solid performance and availability.... I guess the old saying "if it ain't broken then don’t fix it" applies here - but hey, it's damn expensive and, if the truth be told, can be somewhat complex to manage... Of course you also have the additional costs of HBAs and more cable etc that needs to go into the back of cabinetry in order to support connectivity.... And then there are the storage vendors that support FC - they all do it, without fail - bear this in mind as you read through this article, as it is relevant.

Finally - let me add the subject of maturity into this section - FC is understood... and I mean by everyone... anyone working in storage, either in the central support function or on the outskirts of it, will understand it (and more importantly the demarcation of who supports it).

Typical use cases of FC in the enterprise - well, to be honest, 90% of all storage (with the exception of unstructured data) is provisioned in this manner, but if you wanted to name them - anything where performance is guaranteed, high-consolidation environments, mission-critical environments etc...

-------------------------------------------------------------------------------------------------------

-DCE / FCoE (Data Centre Ethernet / Fibre Channel over Ethernet)-

--------------------------------------------------------------------------------------------------------

FCoE - so as the name suggests, Fibre Channel over Ethernet... it obeys all the characteristics of Fibre Channel and then throws them over a DCE network... OK, first point... is DCE "ordinary Ethernet"? Data Centre Bridging / Data Centre Ethernet, and whatever the heck else vendors want to call it (dependent on which one of them thought they invented the technology and the term), is not the same as the Ethernet that you and I have all come to know and love... it's different... so bear this in mind if you choose to go down this route - DCE / DCB is another layered skill set that you need to learn and manage.

What else then? Well - as I mentioned in the Fibre Channel section - you have spaghetti in the back of cabs... fibre for FC, copper or fibre for Ethernet - it gets horrid if it's not managed properly, and managing costs money - in comes DCE (and hence FCoE) and the promise of CNAs (Converged Network Adapters) - the idea of being able to use singular PCI cards that supply both IP connectivity and storage sounds like a good one - and I guess it is? Really? Is it? Hmmm - hold that thought - and ask yourself the question: where do I use FCoE - top of rack, or end-to-end through the network? Guess it's time to add more flavor to this offering then.... (not that any of this is bad)

There are a number of views as to how DCE / FCoE will emerge into the mainstream; the vendors will tell you "it is a journey" (how I hate that term!). This will typically start as a converged top-of-rack switch offering, which then breaks out to the disparate networks (Ethernet and FC) and uses those traditional networks to go to their end points (Ethernet-based services or storage FC targets respectively) - ultimately the aim is end-to-end FCoE for storage use.

DCE / FCoE / Converged network - Top of rack

-------------------------------------------------------------------

Most of the offerings today will talk about using CNAs to provide traditional network and storage through a converged piece of cable to a top-of-rack or top-of-pod converged-protocol switch, and then break out to their separate traffic types, i.e. FC goes out to a traditional Fibre Channel network and Ethernet breaks out to the corporate Ethernet function - so the promise is that you need less cable in the back of server cabs, complexity is reduced and, more importantly, so is cost... But also think about fault domains, traffic analysis within a model where storage and network traffic are mixed, and performance issues.

One other thought here - who manages the physical and logical entities of this top-of-rack switch? Hardware could be handled by either team, as long as both parties are aware of the criticality of loss of Ethernet or FC... I don’t know about you, but every time I have described the concept of a SAN, and the fact that guaranteed delivery is king, to a networks guy - the question pops up: "why?" - I guess this is just down to the different way that Ethernet (and hence the application and protocol stack) takes care of things - look at traditional Ethernet, it is designed to throw away packets when the network gets congested / suffers some type of hassle. FC - well, it's got to be delivered every time - it's a fundamental difference in approach and thinking (and each of the respective teams sometimes finds this hard to grasp - really just a consideration if thinking about converging the teams that manage these technologies).

DCE / FCoE / Converged network - End to end

--------------------------------------------------------------------

The ultimate aim of this technology really is to provide a singular network that provides all network-based services. We are really talking about never breaking out to disparate networks; IP is held within the network, and storage plugs straight into the service also. This really should be where the dream becomes possible - but again it is full of issues that need to be sorted. Demarcation of role is one (who manages this network from both a physical and logical view). Tooling in terms of intricate fault-finding / performance management is not quite there yet. More importantly - the ability to plug FCoE targets (i.e. storage arrays) directly into a converged network (i.e. native FCoE) is limited. Some of the vendors have an initial offering that is there, or thereabouts (NetApp spring to mind as the first example of this - and one able to offer it at an enterprise level).

I have touched on the operating model in the above section and also within this one - who owns this, who has administrative control, and who completes items such as zoning etc creates an interesting question.... Ultimately, this technology is not just about convergence of the network, but convergence of an infrastructure-type service, and to be able to service this the organization would typically need to move from a silo-based support model to a cross-technology one... Any org change is difficult - bear this in mind when looking at this technology.

Finally - it’s probably worth giving a little insight into where adoption of this technology within storage is taking place (or the rate of adoption) and what the market is saying. I attend a number of industry forums - in 2008, if you asked people in these forums about their possible adoption timelines for FCoE, they would have said within the 2-3 year timeline... (Funny that - not many people seem to have it!) At recent forums (the last two that I have attended - the back end of 2009 and Jan 2010) adoption is now leaning towards the 5-year (from now) timeline. Why is this? Well, back in 2008 we were probably seeing the "hype cycle" - reality set in, and now we are seeing a more pragmatic approach. Of course there are other reasons why this is also prolonged - there are more and more options - the (disruptive) emergence of 10-Gig-E and the use of file-share protocols.

----------------------------

-Ethernet options-

----------------------------

Hey - here is something I never thought I would be writing on a storage blog...

Me, Ethernet and storage have had a pretty rocky ride together...

I have had issues with the likes of iSCSI – not because the protocol specifically has many issues (it does have some… but it's not all bad) but more the way that people have thrown it into a generic network used for other “stuff” and then wondered why it all went wrong…

Back in the day, if you wanted to use iSCSI in enterprise infrastructure, the correct and right way to do it would have been to throw in dedicated IP switches with dual resilience and then hey presto – but hey, doesn’t that just become an IP-based SAN?? And as soon as you dedicate those director-class switches to storage, they become just as expensive as putting in director-class FC switches, but without all the lossless characteristics that FC gives you – so why bother…

Then there are CIFS and NFS… those nasty unstructured data protocols that we have whizzing around our networks and that we love (or is that hate?)…

Whilst unstructured data itself is a right pain in the backside due to its unmanageable nature (and I am talking about the data itself) – it's got to be said that the provision of storage to hosts that wish to use CIFS / NFS filers is uber-easy…

Do you have all that nasty zoning to go through?? NO!

Do you have all that LUN masking to go through??? NO!

Do you have to worry about provisioning LUNs to certain target FC ports??? NO!

In most cases, you simply provision a file system and present it to the network – hey presto!!! Yes, you probably have to worry about some permissions, authentication and all that jazz, but that skill set is far more readily available than that of good quality storage admins.

But it's not all rays of sunshine over here in NFS and CIFS land… why not? Well, two things I guess: the performance characteristics of the arrays that support these protocols, and the network that sits between host and storage array…

Let's do away with the network question first… 10-Gig-E is here… It's rocked up and it's larger than life… It's fast, it's easy, people understand it – it's just quicker. In fact it’s really quick…. This technology will now become the de-facto standard deployment model in DCs over the next 2-3 years, which gives some interesting options. Yes – it is still a shared network, but with the performance hike and the ease of administration associated with CIFS and NFS – this protocol can simply no longer be ignored as a viable way of provisioning disk to servers. 1-Gig-E was always a problem, as it just couldn’t cope with 50% of what you could throw at it…

Then there is array performance… so this is an interesting one… Some of the current leaders in the CIFS and NFS filer space have issues… the number of disks they can stripe across and the performance of the overall array are still not up there with the traditional block array vendors, but now the customer has choices… We can look at starting to merge technologies together to get some real performance hikes…

NAS appliance “heads” married to a “wide stripe” array connected to a 10-Gig-E network has real mileage, and that is without considering some of the other economies that come into play in these configurations (examples of these being compression, de-dupe etc…)

--------------------

- Summary -

--------------------

OK – enough of my ongoing dragged-out rants on this blog, time for a summary and wrap-up of where I think this really leaves us…

Fibre Channel

When guaranteed delivery and performance characteristics are required, Fibre Channel is going to be king for some time. It’s where you keep the crown jewels and all that mission-critical stuff. This protocol is not going anywhere, and will be around for the foreseeable future.

FCoE / DCE / DCB

I was not sure whether to put this as a heading in its own right or within the Fibre Channel section… This protocol (or set of protocols) still observes all the FC niceness – just over a converged network. Whilst I do not believe that end-to-end FCoE will go mainstream in the “near-time”, it will happen.

You will see FCoE (or other convergence) at the server side first, to converge connectivity from the host into a top-of-rack / top-of-pod switch – and this will happen relatively soon (in the next 12 months or so).

My guess is that you will see this technology start to make an entry at the top of rack in the next 1-2 years, and then end-to-end in the next 4-5 years. The promise of converged network cabling is just too hard to ignore; however, I think the discussion of what ends up on traditional block storage and what use cases make a move to Ethernet storage (NFS) is a debate that will start raging.

Traditional network-based protocols (CIFS / NFS / iSCSI etc)

As mentioned above – I think there will be a renewed emergence of these more traditional protocols, but for more widespread use cases.

People don’t like complex provisioning technologies (i.e. those associated with FC as an example). With the wide-scale implementation of 10-Gig-Ethernet on the horizon there is an ideal play here – the use of performance-orientated network-attached storage for less critical applications (such as development, testing etc). Companies such as Atlantis Computing (with their ILIO product) are making this space even more appealing…

iSCSI – well, I still think it's dead; to set it up properly still needs a heap of infrastructure, and there are security issues here (IP spoofing etc). If you want to use NAS-type services, then use them – don’t make it look like block – as a protocol it does not handle disconnects that well, whereas NAS / CIFS can! Rip block storage out from under a host and it typically gets totally bent out of shape – a situation to avoid!

One more point to make – The storage array vendors really need to think about this!

The sacred ground of storage array vendors being able to just sell FC (or FCoE) arrays is coming to an end – the ability to handle file protocols while also having decent performance characteristics is going to be key – there is a market to be won or lost here – I believe that this is where you will see the more mature vendors have an approach vs the one-shot-wonder vendors (before anyone asks – I am not thinking about just whacking someone else’s appliance (or your own for that matter) in as an answer – but a truly integrated array with end-to-end offerings).

With people wanting cloud services – the ability to provision easily is going to be key, and FC-based protocols are not easy… People don’t want to faff with zoning and masking, nor will they want complex FC storage for all use cases – it's expensive, and the people to provision it are expensive! Food for thought, all you storage vendors out there!

--------------------------------------------

- Wrap Up / Final thoughts –

--------------------------------------------

FC - Where mission-critical / performance / availability characteristics are required by applications – stick with the FC protocol – this will continue to be the standard approach for some time to come! (Maybe with the emergence of FCoE / convergence at the server host – but this will still be using the FC protocol within the stack.)

End-to-end FCoE will NOT happen any time soon, as there is no compelling event to push it – HOWEVER, you will see convergence start happening at the server side (CNAs into top-of-rack switches, or converged network ports directly on servers) – but not the complete way through the fabric (yet); the market and the products need to mature… This will be at least a 3-year timeline before people adopt in anger (at least!)

NFS / CIFS
NFS or CIFS (or other typical NAS protocols for that matter) – use them for non-critical / low-availability / development / test environments, with the possibility to save a HEAP OF CASH... Look at the option of using traditional NAS protocols (CIFS / NFS) over 10-Gig-E to drive costs down – I believe you will see this approach merge back into the production space as people become more trusting of 10-Gig-E performance, and also with the smarts that the NAS offerings will bring to bear! Start using them and save some dosh! This accounts for more than 50% of most people's SAN block storage array usage at the moment.

Final note for the storage array vendors...
Follow this protocol market carefully… The winners will come out with some level of true integration or smarts that pushes towards a complete end-to-end solution (providing FC / FCoE / NAS in one box) and enables all of the above use cases (protocols) as options, but with one back-end array. Look for smarts such as de-dupe, compression and performance layouts as part of the commodity offering (along with the standard stuff we have today).

Thursday, 18 February 2010

SRDF / V-Max / DMX - why o why o why 73 code :-(

Hmmmpphh I have the hump....

Why o why o why, EMC, did you make it so that I had to deploy 73 code to my DMX install base just so that I could use SRDF to a V-Max...

For a company that is so hell-bent on maintaining compatibility / allegedly ensuring that you can make things "easy for me" - why did you land me with this little gem...

RPQ hell is now ongoing (which affects EMC)
Remediation hell is now ongoing (which affects me...)

It's just pain all round - please - do make it easy for us!!!!!!! (and maybe we can just save some cash along the way!)

Wednesday, 10 February 2010

Worth a read....

So, I don't normally post links to other websites - however, this is fantastic.... @sunshinemug pointed me in the direction of this website - and if it does not have you in pieces, laughing your head off - then there is something wrong with you...

So what is it about? Basically the ramblings of someone sleep-talking... take a read and enjoy...

please visit
http://www.sleeptalkinman.blogspot.com/
and enjoy...

**warning - there is some colorful language 8-) **

Thursday, 28 January 2010

Foggy Computing

So... this term cloud computing is being bounced around, alliances are springing up everywhere and apparently there is a paradigm shift to this new cloud services model... but you know what, I think it should be called fog (or foggy) computing.


Why? Well, my reasoning goes something like this:
  • There is a total lack of use cases; yep, SaaS, IaaS and such terms have been bandied around, but where is the detail?
  • Management is being spoken about as if you can just wave your hands and suddenly you get a compute service delivered - but really, where are the overlaying / orchestration management tools, or in fact a list of those which could be used?
  • Technical ratification of all of the major players? Oops - that’s not there either (believe it or not – Oracle does account for something like 60% of my estate – so I would like to be able to use whatever I deploy in conjunction with this tool)
  • If I deploy this nice new shiny cloud along with tools – I now sit in the middle of even more tool sets rather than consolidating - I can’t get rid of my existing tool set as I still have to manage SPARC / AIX / HPUX resources, and now I am getting additional toolsets thrown at me with no roadmap or view into how I am going to solve this.
  • Open APIs / ways of integrating... you know what, XML is a great way of letting me "do stuff and manage" a compute environment, but I still need something to orchestrate this... I am starting to feel that open APIs are a new way of the vendors saying that the orchestration bit is just a little too hard and leaving it to the end-user... This is a fine approach – but also suggest a way forward / management framework along with the tools.
  • Overstated "easy to do" configs... A number of these have been presented at very large forums, an example being Virtual Hosted Desktops bandied around as a no-brainer... I personally have just been involved with / at the back end of a significant deployment - and it has been, by far, one of the most difficult things I have done (if you want some details and guidance - please shout) – don’t buy this as being easy...
  • Alliances - so I am not sure what is going on here... I have alliances, new partner agreements and all sorts of stuff spinning up – it's coming out of my ears to be honest... If you don’t have an alliance on the way, apparently you are “behind the times....” – but what do they really mean??? Even basic interoperability sheets are yet to be fully populated, along with application & ISV support.
  • Go-to-market model... Is there really a go-to-market model, other than loads of salesmen trying to sell me bits of stuff that they don’t understand? In the worst case I have actually had someone come up to me and say - do you want to buy a cloud... I mean... come on vendors - please sort this out...
  • And by the way - when you come and sell this stuff to me, I really don’t expect to have to tell you how your solution is being sold, how an alliance is being formed and what you can and cannot do with the infrastructure - surely that’s your job (and before anyone asks - yes, this really did happen!)
What is starting to arrive is some pretty smart compute platforms (and I use this in the broadest meaning of the term, inclusive of CPU / RAM / network / storage etc), but at this time it is just a bunch of stuff that needs gluing together – and lots of people writing operating models around this kit...


Also – whilst it is sold as simplifying environments – this may be true for those doing the provisioning after the infrastructure has been built; in truth, the underlying complexity and layers of abstraction are really quite scary... Performance fault-finding, as an example, has just become orders of magnitude harder...


So back to the title - why foggy computing? Well, it’s sort of unclear, un-instrumented, undefined, unworkable and, quite frankly, unreal at this time...


Cheers,


S.