Friday, September 2, 2011

The Thorny Challenge Of Elasticity

Listen to people describe their idealized cloud, and sometimes you'll hear them say "infinitely elastic".
Most IT infrastructure people cringe when they hear that sort of talk, and justifiably so.  At some point, all infrastructure becomes physical no matter how abstracted or virtualized you make it.
Not only that, being able to meaningfully grow and shrink infrastructure resources dynamically in response to changing application demand is a very difficult problem -- especially for general purpose use cases.
The application environment has to be able to request and release resources.  The supporting infrastructure has to work in concert with the application stack, of course.
And then there's the head-hurting topic of policy definition and management: under what circumstances should application "A" get more resources, and applications "B", "C" and "D" get less?  
How do you define what types of internal or external infrastructure are allowed candidates for supporting dynamic application expansion?  
And how do you communicate back to the business what real-time tradeoffs are being made, and show them that everything is compliant?
I don't think anyone has complete answers for most use cases, but some interesting work in this area is starting to manifest itself.
In particular, the VCE crew is presenting their current progress on a solution dubbed "Application Lifecycle Manager", or ALP for short.  And it's an interesting discussion, to be sure.

The Real Action At VMworld Is The Technology Previews Most people who are new to VMworld tend to focus on the big announcements: products, alliances, services, etc.  While those are interesting, I am inevitably drawn to all the technology previews you can see at the show -- they give you a good sense of what will be productized -- usually by the next VMworld!
EMC is doing lots of these cool technology previews at the event.  One big dose will happen at Pat Gelsinger's Supersession (#SUP1006) Tuesday morning at 10:00 AM -- not to be missed.  Another one will be Chad's "kitchen sink" -- "Next Gen Storage and Backup For Your Cloud" (#SPO3977) Tuesday at 2:30.
Trust me, it wouldn't be a Chad show unless there were some off-the-hook technology previews :)
VCE is doing their fair share as well.  One of their sessions describes their work on using Vblocks to create new forms of automated elasticity for larger VMware deployments.
And, despite their excellent progress, it gives you an appreciation of just how much work lies ahead ...
The Problem In A Nutshell
Larger applications are inherently dynamic in their resource requirements.  In addition to moment-to-moment swings in demand, there's also the lifecycle aspect: from development to test and ending with decommissioning.
Virtualized infrastructure resources (compute, memory, storage, network, etc.) are getting inherently more dynamic as well.  Marrying dynamic application resource usage with virtualized resources creates the tantalizing potential of a world that delivers better service levels while using far less infrastructure resource.
Traditional applications were designed for the physical world; they aren't designed to scale dynamically -- all we can do is give them static allocations of resources, and try to do a little hidden magic in the background (e.g. automatically tier storage, balance server workloads, etc.)
But the newer application environments (think SpringSource, vFabric, the new Data Director, etc.) *can* express their infrastructure requirements to virtualized infrastructure.
And that makes the potential for orchestrating application resource elasticity more than just a pipe dream.
Why VCE?
One of the advantages of working on a Vblock is that it's a known, popular and standardized environment: APIs, infrastructure management with UIM, and so on.
I can't prove it, but I wouldn't be surprised if there were far more clouds running on Vblocks today than any of the alternatives.  A lot more.
That standardization property enables anyone working on a Vblock to have far more cycles to tackle the "complexity above" vs. the inherent complexity below.
The VCE engineering team is using this property to do all sorts of interesting productization and solution engineering.  Much of it is at the cutting edge of deployable enterprise cloud technology, and this effort is no exception.
Not only that, the VCE folks are deeply embedded in all sorts of real-world cloud scenarios with customers and partners these days -- an essential component for innovating new solutions to newer challenges.
Imagine A Use Case
A good starting point is to consider a modern three-tier web application: an application layer (perhaps using tcServer), a transactional messaging layer or perhaps an in-memory low-latency data grid (perhaps using GemFire or RabbitMQ) and, of course, a database layer (maybe using the new vFabric Data Director).
One application, three pools of infrastructure resources to dynamically optimize.
Web applications are always a good example for this sort of discussion; not only are they inherently dynamic in their requirements, they tend to be built from newer components.
To make this web app "infrastructure elastic", you'd need some sort of mechanism to define not only the components (what they are, their initial resource allocations, their targeted performance parameters, etc.) but also the macro-application environment.
You'd want to not only expose the pool of available resources and some mechanisms for taking advantage of them, but also provide some guidance and constraints about exactly how and under what circumstances you'd initiate a resource expansion or contraction.
And understanding those broad requirements is helpful for digging into some of the details around ALP.
Deconstructing the ALP
The ALP is built on a few simple notions.
One is the concept of a "blueprint", which states desired infrastructure policy both for individual application components (maybe GemFire in this example) and for macro application policies (e.g. don't let transaction time get too slow!).
Blueprints are used to drive workflows, monitor results and evaluate remediation scenarios where desired state varies from actual state.
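As a thought experiment, here's roughly what such a blueprint might look like if expressed as plain data, along with a helper that compares desired state against observed state.  To be clear: the field names, components and thresholds below are my own inventions for illustration -- not the actual ALP blueprint format.

```python
# Hypothetical sketch of an ALP-style "blueprint" as plain data.
# Every field name here is illustrative, not the real ALP schema.
blueprint = {
    "application": "order-entry",
    "components": {
        "gemfire-grid": {
            "initial_nodes": 4,
            "max_nodes": 12,
            "target_p95_latency_ms": 50,   # component-level policy
        },
        "tcserver-web": {
            "initial_nodes": 2,
            "max_nodes": 8,
        },
    },
    # Macro policy spanning the whole application
    "macro_policy": {
        "max_transaction_time_ms": 500,
        "allowed_resource_pools": ["vblock-east", "vblock-west"],
    },
}

def violations(blueprint, observed):
    """Compare observed metrics against desired state; list what's out of policy."""
    out = []
    for name, policy in blueprint["components"].items():
        target = policy.get("target_p95_latency_ms")
        actual = observed.get(name, {}).get("p95_latency_ms")
        if target is not None and actual is not None and actual > target:
            out.append((name, "latency", actual, target))
    return out
```

Feed it a measurement like `{"gemfire-grid": {"p95_latency_ms": 80}}` and it reports the GemFire tier as out of policy -- exactly the kind of desired-vs-actual gap a blueprint-driven workflow would then try to remediate.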
The "blueprint orchestrator" is responsible for doing most of the heavy lifting.  It can drive workflows through vCenter Orchestrator (for example), monitor application-level performance and associated resource consumption (using Hyperic in this example) and report back higher-level metrics, using the vCenter Service Manager in this example.
The interesting part for me will be the "remediator" block, shown above.
Although this solution has a limited palette of responses today (e.g. clone application, expand cluster), it's not hard to imagine a greatly expanded repertoire of both remediation activities, as well as policy constraints on those activities.
Policy remediations could run the gamut from re-tiering storage service levels (using something like FAST) all the way through selectively bursting portions of candidate workloads to alternate resource pools (maybe using VPLEX?); either to directly provide more performance to the application in question, or perhaps to free up additional resources by relocating less-important workloads.
Policy constraints could include the usual resource parameters (available infrastructure, latency, etc.) or risk-avoidance concerns (regulatory compliance, minimum geographical separation, must use isolated redundant infrastructure, and so on).
Indeed, it's not hard to see that -- before long -- most of the "secret sauce" will move to this remediator component.  It's the one place where all the potential infrastructure resource responses will be exposed -- and key decisions made!
It's also the one place where all the policy constraints on resource reallocation will be captured.  And it's where the "smarts" will ultimately live for balancing and optimizing competing demands.
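To make the idea concrete, here's a minimal sketch of how a remediator with a limited palette might pick a response subject to policy constraints.  The action names, violation kinds and constraint fields are my own invention for illustration -- VCE's actual implementation will certainly differ.

```python
# Illustrative remediator sketch: a small palette of responses, filtered
# by policy constraints.  Names and structure are invented, not VCE's.

def choose_remediation(violation, constraints):
    """Return the first policy-permitted action for a violation, or None.

    `constraints` captures the policy side: which actions are allowed,
    and whether spare capacity exists to support a cluster expansion.
    """
    candidates = {
        "latency":  ["retier_storage", "expand_cluster"],
        "capacity": ["expand_cluster", "clone_application"],
    }.get(violation["kind"], [])
    for action in candidates:
        if action not in constraints["allowed_actions"]:
            continue  # policy forbids this response outright
        if action == "expand_cluster" and constraints["headroom_nodes"] <= 0:
            continue  # no spare infrastructure to expand into
        return action
    return None  # nothing permitted: escalate to a human

# Example: a latency violation where policy forbids storage re-tiering
constraints = {"allowed_actions": {"expand_cluster"}, "headroom_nodes": 3}
action = choose_remediation({"kind": "latency"}, constraints)  # -> "expand_cluster"
```

Even in this toy form, you can see why the "secret sauce" concentrates here: the candidate list is where the repertoire grows, and the constraint checks are where risk-avoidance policy (compliance, geography, isolation) would plug in.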
Are We There Yet?
Achieving some meaningful measure of automated and dynamic resource elasticity is one of the next "holy grails" for IT architects everywhere.  And, no, we're not there yet -- although we can see it from here.
Part of the challenge will, of course, be maturing the underlying technology integrations.  There's good progress here, but more needs to be done.
However, don't let technology mask the real challenge here -- and that's coming up with agreed policies for allocating shared and pooled infrastructure.
Policy maturity always lags technology enablement by a great deal, and there's no reason to expect that this topic will be any different in that regard.
Indeed, I'd fully expect a closed-loop feedback process to eventually be used here: here's the policy we initially set, here's how well it did, and here's how we're going to tweak it going forward to do even better.  Lather, rinse, repeat.
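That feedback loop can be sketched in a few lines.  The adjustment rule below is purely illustrative, assuming a p95 transaction-time metric and a tunable threshold that triggers remediation:

```python
# Minimal sketch of closed-loop policy tuning: set a threshold, measure how
# it did, nudge it.  The 10%-step rule is invented for illustration only.

def tune_threshold(threshold_ms, observed_p95_ms, target_p95_ms, step=0.1):
    """Tighten the remediation trigger if we missed the target; relax it
    when we're comfortably under, so we stop remediating more than needed."""
    if observed_p95_ms > target_p95_ms:
        return threshold_ms * (1 - step)   # trigger remediation earlier
    if observed_p95_ms < 0.5 * target_p95_ms:
        return threshold_ms * (1 + step)   # over-provisioned; back off
    return threshold_ms

# Lather, rinse, repeat: feed each period's measurement back in
threshold = 400.0
for p95 in (520, 510, 430, 180):
    threshold = tune_threshold(threshold, p95, target_p95_ms=450)
```

The interesting engineering isn't the arithmetic, of course -- it's agreeing on what "did well" means for each application, which is exactly the policy-definition problem described above.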
It's nice to see the progress the VCE team has made with their current solution -- it's quite compelling in its own right.
But there's still a long road ahead to be travelled ...

By: Chuck Hollis

Thursday, September 1, 2011

Vblock FastPath/VDI -- Changing The Model

If you follow this blog, you know I'm an ardent and passionate fan of the Vblock concept.  I just can't help myself.
For me, it represents acceleration.  
At a tactical level, that means getting results faster.
At an operational level, it's the ability to spend more time on the stuff that matters and less time on the stuff that doesn't.
And -- strategically -- it means accelerating IT's transition to an internal service provider.
If you've sat through the Vblock roadmap sessions, you'll notice that development is progressing nicely around two axes.  One important axis is "better Vblocks": faster, more efficient, more functionality, better security, better operational integration, and so forth.  I'm always surprised at just how fast these folks are moving.
The other axis is becoming important as well: pointing the Vblock concept at well-established IT use cases, and creating single-SKU "products" that bring the Vblock model (real product vs. reference architecture, speed of deployment, pre-integration, single support, etc.) to specific and popular use cases.
Such is the case with the new Vblock FastPath/VDI.  In a nutshell, it's a single product you can buy that does VDI -- at scale -- in a box.
The Essence Of The Vblock Controversy 
We've created an entire generation of IT professionals who excel at hand-crafting bespoke and highly customized IT "solutions".
While that skill set is still useful in some situations, it's quickly giving way to standardization: standardized service catalogs, standardized operational models and standardized shared infrastructure.
The Vblock is the controversial poster child of this new model.  IT traditionalists tend to dislike it.  IT leaders who are highly motivated to move quickly to the new model love it.
It's really that simple.
Show me someone who's highly motivated to change the way they're doing things for the better, and they'll fully consider a Vblock proposition.  Conversely, with no motivation to change, there's little interest -- and sometimes, downright hostility.
Who said IT was boring?
The benefits and value propositions associated with a Vblock approach can be greatly magnified if they're applied to specific, repeatable and problematic use cases for IT organizations: especially ones where there's no question whatsoever of having to do things differently.
For example, doing VDI at scale :)
What Makes VDI Hard
Having been in my fair share of senior-level VDI discussions with customers, I can tell you it's frequently a big, hairy, multidimensional IT challenge.  
There's the struggle to come up with a meaningful and realistic ROI that reflects business realities, and not IT-centric ones.
Then there's the resource question -- desktop teams are pretty thin to begin with -- where will the "surge" skills come from to make the transition?
Digging deeper, there's a complete re-architecting of how desktop services are specified, delivered, monitored, secured, etc.  New technologies, new processes.  Yikes!
What if we -- as vendors -- could make some critical aspects of this problem dead-simple for IT organizations?
That's the goal of FastPath/VDI.  Technically speaking, the formal name of the product is "VBLOCK FASTPATH DESKTOP VIRTUALIZATION PLATFORM", but FastPath/VDI just rolls off the tongue :)
Choose Your Size
Perhaps the most telling part of the new offer is the choice you need to make: what size?
The FastPath/VDI product comes in three sizes: 500, 1000 and 1500.  That's about the hardest decision you'll have to make.
Need more?
Either expand considerably in-place, use multiples, or use the standardized approach to build yourself a really big one.
Same ingredients and process, different scale.
In the box, there's just about everything you need.  
There's the Vblock itself -- and all the required additional components from the parent companies: VMware, Cisco and EMC.  There are connection servers.  There is security software and management software.  And there are three years of support and maintenance.
Going farther, there's a pre-site configurator that asks all the key questions needed to build a quick-install config.  There are new wizards that automate final configuration and administration.  It's all there.
Other than going to an external service provider, there is nothing in the marketplace today that is faster and easier to deploy for VDI.
How This Changes The Game For VDI Projects
First, you're purchasing a known quantity with FastPath/VDI.
At the outset, you know what it does, how it will perform, how it operates, how it's supported, and -- most importantly -- what it will cost to acquire and operate it.
Compare that with the usual "home grown" IT project.  
You're usually not quite sure what it will do, how it will perform, or how it will operate; the support model is challenging; and -- most importantly -- you have an unclear notion of what the entire endeavor will end up costing.
Something that's well-defined has some important advantages.
For example, you can have an intelligent ROI discussion without making stuff up.  Here are *all* the costs, here are the benefits.  You can also commit to a project timeline that doesn't involve lighting candles at the altar ...
Second, you can now spend your IT resources on the parts of the project that require the really heavy lifting.  For example, migrating your users and making them happy.  Or getting comfortable with the new workflows and processes associated with provisioning, monitoring and managing desktop services using the new model.
Third, you can get to tangible results orders-of-magnitude faster than trying to assemble, integrate, deploy and support the solution yourself.  And -- at the end of the day -- that's what business people care about.
More Wizardly Goodness
The operational wizards that the VCE team has created here are worth a detailed discussion -- after all, it's all about making the technology easier to use, isn't it?
For starters, there's a pre-site configuration survey tool that makes the actual installation and firing up of the Vblock possible in minutes vs. hours.
Not only does it speed the production process, it uncovers any, ahem, *interesting* external configuration challenges you might have before the equipment shows up, and not afterwards.
But there's more -- far more.
The VCE engineers have built best-practices wizards around four important functions: initialization, installation, deployment and reset/reclaim.  They found that those four key areas were responsible for a majority of the frustration and inefficiencies associated with getting a VDI environment up and running.
The initialization wizard has a lot to do: it configures the Vblock itself for use as an optimized VDI platform -- the core elements of storage, compute and the VMware products.  The internal and external aspects of the network are defined (again, using best practices), as well as AD, DNS and DHCP.
Many hours of work are accomplished in minutes, and -- more importantly -- done right the first time.
The installation wizard is largely responsible for configuring the VMware View components, as well as the connection broker.
Again, the same sort of benefits: done in minutes, done using documented best practices, done right the first time.
The deployment wizard creates the optimized storage layout for VDI, automatically sets up the "gold" master images for cloning, and then does the handoff to View Manager.
Again, the same benefits.
Finally, the reset/reclaim function automates recycling of no-longer-needed VDI instances, introduces a new master image, or -- perhaps more frequently -- gets you back to an initialized state in case there's a problem with either a specific VDI configuration, or -- perhaps -- the Vblock configuration itself.
Not that I would ever have to reset a device to factory defaults :)
Performance?  Know What You're Getting.
Justifiably, many VDI projects are concerned with end-user performance.  If the new desktop experience is perceived as slower than a traditional approach, user adoption will be a long, hard slog indeed.
The team has invested considerable effort in precisely characterizing performance for a number of end-user personas.  Your deployment should see exactly the same performance VCE has characterized unless there's a serious misconfiguration issue.
Need more?  Need less?  There are clear recommendations on how to turn the knobs to get exactly what you're looking for in each use case profile.
Security?  Baked In Vs. Bolted On
It's not enough to bake some anti-virus software into your gold image and call it a day anymore.
Part of the appeal I found in this VDI-in-a-box product was the extensive security functionality that had been fully integrated as part of the offering.
From security configuration to patch management to event logging, it's arguably best-in-class.  All fully integrated with both VMware and Vblocks.
All The Cool Tech
Since VCE Vblocks are built on the very best from the parent companies, you'll see a healthy helping of the relevant state-of-the-art technologies from the industry leaders.
From storage to networks to security to virtualization to management -- every component is arguably at the top of its respective class.  Deep-dive technologists might want to argue one aspect or another, but -- from a big picture perspective -- it's probably what you'd want to build for yourself.
That is, if you had the time, money and inclination :)
And Let's Not Forget ...
The VCE team has re-engineered the entire procurement and provisioning process to enable customers to go from purchase order to production in under 30 days.
That's very fast indeed, if you think about the alternatives.
There's single, seamless support from VCE.  Release updates are delivered as a single, tested and integrated whole vs. dribbled out from the respective vendors.  And, of course, there's one number to call when you need help.
Stepping Back A Bit
It's one thing to argue the pros and cons of how best to design, build, integrate, operate and support virtualization at scale in a theoretical sense.
In many regards, that conversation has started and continues to this day.
It's another thing entirely to focus on a specific well-understood use case (like VDI) and compare and contrast the specific differences between the approaches.  The comparison couldn't be more stark from what I see.
I wonder what arguments the traditionalists will offer for the do-it-yourself approach this time?

By: Chuck Hollis