Thursday, September 29, 2011

Learning To Compete -- The Real Challenge Of ITaaS

Competition is the engine that drives us all forward.
At an individual level, we compete with others who might be able to do a better job, or do things for a lower cost.  At a corporate level, our companies compete each and every day for customers.
Organizations within companies have learned to compete as well.  If you deliver a service to the business, there are always external competitors who'd like to show your internal customers a better, faster, cheaper way to get things done.
I believe the necessity to compete for internal customers is the primary engine behind IT-as-a-service concepts.
Fail to successfully compete, and your internal customers will likely go outside to get what they need.
In this post, I'd like to share with you the most cogent explanation I've seen on this topic to date: what's happening, what it means to IT organizations everywhere, and -- most importantly -- what you can do about it.
Welcome To The New IT Department
The best way to get into this discussion is to take the perspective of the business person who'd like to get something important done.
Maybe it's a new requirement, maybe it's a new idea, maybe it's a new marketplace -- it doesn't matter.
As a business person at EMC, I often start presentations by introducing my new IT department -- it's my corporate credit card.  I can potentially consume a *lot* of IT externally without having to really answer to anyone if I choose.
These external providers offer attractive services, and let me select and consume what I need with a minimum of lengthy approvals, negotiations, project requests, planning meetings, etc.  Bottom line: if I can find what I need externally, I win.  I can get where I'm going faster.
Now, in all fairness, there are policies and practices in place that are designed to keep me from doing unwise things, but I've got considerable latitude nonetheless.  Especially if I can get something going before anyone finds out.
The truth behind this phenomenon is simple: we now have a generation of knowledge workers, managers and business leaders that are quite comfortable with technology.  They use it every day, and they know what's out there.  Personal information technology isn't the mystery it was ten or twenty years ago.
When I hear people use the phrase "consumerization of IT", it isn't about the technology getting dumber; it's about the IT consumers getting smarter -- and more demanding.
Where Will Your Business Source IT Requirements?
I know what many of the IT professionals reading this might be thinking: "well, we have policies in place to keep that sort of thing from happening".
I'm sure you do, but -- frankly -- I think you're underestimating the persistence of a motivated business worker, manager or leader.
Collectively, we're trained and encouraged to solve problems and find creative solutions -- and if that means going outside of enterprise IT, so be it.  Better to ask forgiveness than ask permission.
As a popular example, it's rarely the case that Salesforce.com's CRM application is brought forward by the IT department as a "better solution".  Or Google Mail.  Or Dropbox.  Or iPads, for that matter :)
The bottom line?  People have more choices than ever -- and there's no turning back.
Enterprise IT Customers Are Looking Elsewhere
Need evidence?  There's plenty to be found -- if not within your own organization, then certainly from any number of authoritative external sources.
Even though the numbers shown here are substantial, I think they're understated in several regards.
The more of a knowledge worker you are, the more likely you're using external IT services and devices to get your job done.  As business models shift rapidly in favor of these knowledge workers, you'll likely see even higher rates of unauthorized IT usage.
Besides, who would admit to a survey taker that they're doing something, err, unapproved?
The prognosis is clear: every year, you'll see more external services, better external services -- and better marketed external services.  There are untold billions being invested in external service provider models, and they're targeting the people who can use their services -- sometimes with the cooperation of IT, sometimes not.
Something to look forward to: just imagine what happens when all those corporate Apple users get their hands on iCloud ...
IT Organizations Must Now Learn To Compete
I often get asked by journalists and analysts as to what might be driving all this interest in IT-as-a-service.
It's simple: IT organizations now have to learn to compete for their internal customers, just like other internal corporate support functions.
Fail to learn to compete, and the future is not pretty.
For starters, IT can lose control -- if they haven't already.  Whether the "shadow IT" is showing up in desktops and closets, or at any number of external service providers, people inevitably find a way to get their jobs done.
Continue to lose control, and internal IT inevitably becomes less relevant to the people they serve.  Becoming less relevant inevitably results in budget cuts, staffing cuts, or often both.  That's how the world works.
If the organization still believes internal IT is important to the overall strategy, they'll invest in new IT leadership.  At various events, I often ask the question "how many of you have had a significant change in IT leadership in the last 24 months?".
Every time, about half the hands go up.
If, on the other hand, the business doesn't see IT having the potential to deliver any unique value, that's when cost-motivated outsourcing becomes popular.  Why are we doing this ourselves when others can do it cheaper -- and maybe better?
How IT Service Providers Compete
Perhaps the most useful example here is to look more closely at organizations that have to compete for IT customers each and every day -- and that's the current crop of enterprise IT service providers.
They have no monopoly.  They have to earn their customers' loyalty each and every day.
For starters, the mindset is usually different -- they partner with the people using their services.  They invest the time and cycles to really understand what their customers are trying to get done each and every day.  Their ultimate goal is a relationship, not a quick sale.
Around that relationship, they'll try and offer as many related services as possible.  This works for the service provider, because the costs associated with acquiring and maintaining a customer relationship are substantial -- best to sell those people as many related things as possible.  This also works for their customers, because -- all things being equal -- it's easier to work with fewer external providers who can offer more breadth of capability.
These same service providers will also invest in something that internal IT organizations almost never consider: a sales and marketing function.  Now, I'm not suggesting that every enterprise IT organization go hire a sales force with quota and a full-time marketing department, but there ought to be at least some semblance of those skills, behaviors and roles in place.
Part of the attractiveness of working with external providers is that they're trying to earn your business.  Part of the frustration that many business users of IT share with me is that they don't see this from their internal IT functions.
Needless to say, all pricing in any service provider model is completely transparent -- you know what you're paying for.  And, if you look in their data centers, you'll tend to find IT platforms and operations designed to deliver as many services as possible from a single "factory" -- with an emphasis on simplicity, efficiency -- and trust.
Being A Competitive Internal IT Service Provider Is Different
Having spent much time in both traditional enterprise IT environments and their service provider counterparts, I've noticed some stark differences at a fundamental level.
For one thing, consumption of IT services is designed to be as easy as possible for the people using the services.  External IT service providers aren't in the business of rationing IT consumption as so many enterprise IT groups feel they have to do.
The operational model is also different: it's all about the services stack: which services, how popular, how reliable, how flexible, how efficient, etc.  To be absolutely clear, I'm not just talking about the services that the customer sees; I'm talking about services that are used by others in the IT operation to service customers.
Ultimately, the technology model is different: it's all about having the minimum number of platforms that can deliver the widest range of shared services.  Everything virtualized is only table stakes; there's a strong motivation to use the same platforms over and over again vs. creating new for each and every new service offered.
And There's More
Digging deeper, there are even more fundamental differences between competitive IT service providers and their enterprise IT equivalents.
As mentioned before, IT service providers invest in relationships with the people consuming their services.  Not "well, we had a planning meeting" relationships, but ones built on taking the time to know what people are trying to get done -- and how the service provider can position itself to be seen as part of the solution vs. part of the problem.
IT service providers tend to have different relationships with their IT suppliers as well: they often bet big on a few key suppliers vs. competing each and every new requirement.  Remember, these folks are highly motivated to standardize their platforms and operations; so the less diversity and the tighter the integration, the better.
Finally, the most intriguing observation is that their organizational models are very different than their enterprise IT counterparts.  The model is built around delivering attractive and efficient IT services vs. one-off large and highly customized IT projects.
Becoming A Competitive Internal IT Service Provider
Not everything directly translates between these two worlds.  But enough of the thinking *does* translate that the external IT service provider model is worthy of serious consideration as a starting point.
In this model, enterprise IT becomes a provider of attractive and competitive IT services.  There's complete transparency as to what the options are, and what they cost the consumer.
Some of those services might be provided with internal resources.  Others might be sourced externally.  And the mix between the two will most certainly change over time.
Internal customers now can turn to internal IT for the same reason they often go external: agility -- the ability to move faster and change direction more nimbly.
At the same time, internal IT has the mechanisms in place to maintain control and compliance -- and, in this model, those attributes are built into the internal services model vs. enforced separately.
IT invests in creating the absolute minimum number of platforms to deliver the maximum breadth of required services -- no more building one-offs for each and every new requirement in the enterprise.
Ultimately, internal IT organizations can play the one card that no external service provider can: they have the ability to understand more about their customers' requirements -- and the business they serve -- than most any external provider.
Put differently, transitioning to an internal IT service provider model helps IT focus on what they're *uniquely* qualified to do best.  And that attribute is the long-term secret to delivering value vs. being seen solely as a cost-center.
IT Transformation Enablers and Components
So, let's circle back and dig in a bit more with the key elements involved in a transition to an internal IT service provider model.
The first logical bucket to consider is the consumption model.
For starters, it's creating a layered self-service catalog that's relevant, attractive and easy to consume.  Keep in mind, ITaaS services are often consumed by other parts of the IT organization or authorized non-IT users vs. ordinary end users.
Costs (actually prices) need to be associated with each service offered.  Indeed, getting to the true cost of delivering a service is often no easy task in itself.  Since costs and prices aren't the same thing, there's a second round of thinking on how best to price services to drive the consumption behavior you're looking for.
Finally, like any retailer (or restaurant!) you have to have enough capacity on hand to meet the immediate needs of your customers.  Making people wait excessively isn't good for business :)
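To make the catalog and pricing points above a bit more concrete, here's a minimal sketch in Python -- the service names, units and dollar figures are entirely hypothetical, and a real catalog would carry far more metadata (SLAs, approval rules, lifecycle states and so on).
```python
# Hypothetical sketch of a service catalog entry that separates what a
# service costs IT to deliver from the price shown to consumers.
from dataclasses import dataclass

@dataclass
class CatalogItem:
    name: str                 # e.g. "Small Linux VM, 90-day term"
    unit: str                 # billing unit, e.g. "per VM per month"
    fully_loaded_cost: float  # hardware, software, facilities, labor
    price: float              # what the consumer sees in the catalog

    def margin(self) -> float:
        # A positive margin discourages casual consumption; a subsidy
        # (negative margin) can steer users toward a preferred service.
        return self.price - self.fully_loaded_cost

catalog = [
    CatalogItem("Small Linux VM (dev/test)", "per VM per month", 55.0, 60.0),
    CatalogItem("Protected database instance", "per instance per month", 310.0, 350.0),
]

for item in catalog:
    print(f"{item.name}: price {item.price} ({item.unit}), margin {item.margin():+.2f}")
```
The interesting design decision is that margin: pricing a service above or below its fully-loaded cost is one lever for driving the consumption behavior you're looking for.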
A second logical bucket is the technology model.  
Lots to talk about here (especially if you're a technologist), but -- at a high level -- we've got a cloud model for internal IT (e.g. private cloud), a cloud model for external services (e.g. public cloud), and the ultimate goal of creating a rationalized balance between the two (e.g. hybrid cloud).
Beneath that is the need for IT to remain in control.  "Trust" in this context is more than just security or compliance -- it includes things like service levels delivered, availability and recoverability -- all the things that go into someone "trusting" a service and its provider.
The phrase used here ("converged infrastructure") really doesn't do the broader platform discussion any justice.  We're talking about biasing towards a dynamically shared pool of resources vs. building little individual technology puddles for each and every requirement.  The "converged" descriptor does do a better job of characterizing the operational models used here.
And, finally, the third logical bucket is the IT organization itself: the skills portfolio, and how that's organized into roles and associated processes.
More On Consumption Models
Going even deeper, there's more that can be said about consumption models.
For starters, the big "aha" over the last year is that the primary consumer (and beneficiary) of IT service catalogs is -- well -- IT itself.
Example: many IT projects require some form of infrastructure: some transient, some permanent.  Why not start by making your infrastructure-as-a-service cater to those internal-to-IT needs?  Same for platform as a service, etc.
Even if your interface to non-IT users is more traditional or consultative, those new requirements can be met far faster and far more efficiently if they're built on top of an agile catalog of IT services, ready to consume.
Many of us think that the real goal of showback and chargeback isn't so much about covering costs; it's about helping individuals, groups and the overall organization to make intelligent choices around IT consumption.  That's one area where there's a clear departure between external IT service providers and their aspiring internal IT equivalents.
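As a hedged illustration of that point, a showback report can be as simple as rolling usage up by business group against catalog prices; the groups, records and prices below are invented.
```python
# Hypothetical showback report: roll usage records up by business group
# so each group can see what it's consuming -- and at what price.
from collections import defaultdict

usage_records = [
    # (group, catalog item, quantity, unit price per month)
    ("Marketing",   "Small Linux VM (dev/test)",    12,  60.0),
    ("Engineering", "Small Linux VM (dev/test)",    40,  60.0),
    ("Engineering", "Protected database instance",   3, 350.0),
]

showback = defaultdict(float)
for group, item, qty, unit_price in usage_records:
    showback[group] += qty * unit_price

for group, total in sorted(showback.items()):
    print(f"{group}: {total:,.2f} per month")
```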
Just-in-time capacity isn't solely about being able to serve your internal customers' needs more quickly; it's about pooling and sharing all sorts of resources (equipment, software licenses, skills, etc.) and standardizing the ingredients.
I've seen both service providers and the more progressive internal IT equivalents get very smart on sourcing technology when they can lock in on fewer moving pieces that are broadly used.
New ITaaS Models Based On Cloud
Since we at EMC seem to spend a lot of time on infrastructure, we often fall into the trap of characterizing "cloud stacks" as mostly infrastructure and the software required to manage, orchestrate and secure them.
Taking a slightly broader perspective, there's much more in play here.
There's the need to "cloudify" large and complex enterprise applications that won't be rewritten anytime soon.  That means being able to containerize them and enable them to run more efficiently (whether internally or externally), which is an important discussion in its own right -- and we'll be talking more about the VCUBE approach to this starting next year.
In parallel, new applications are being created all the time.  Indeed, VMware's broader strategy here around Spring and CloudFoundry represents a new model for creating enterprise-class, cloud-friendly applications from the outset.
And, every day, there are ever more pure SaaS offerings that meet the needs of the business better than any internal effort could hope to achieve.  Figuring out how to evolve the entire application portfolio forward in an IT-as-a-Service context is no small feat in itself.
I think it's no surprise to anyone that we're witnessing the early days of a revolution on the client side of information access.  The familiar desktop paradigm is now joined by finger-friendly applications running on various mobile devices.
Simply re-creating a full-screen web or desktop experience isn't exactly what people want.  Going forward, IT will be responsible for the experience, and not necessarily the device.
From an enabling technology perspective, EMC has focused on three areas that we think will be key in this world to extend what virtualization is already doing.
The first is creating "trust" with various cloud models -- more than just security, it's being able to assure service delivery regardless of the circumstances.  The second is information mobility -- in this world, information will be moving around a lot, and ordinary replication or data transfer mechanisms won't fill the bill.
And the third is orchestration and automation -- the ability to manage processes and outcomes vs. individual tasks.
Philosophically, we strongly believe these capabilities should be built into the services themselves, and transparently available for any and all use cases that come along.
New Organizational Models -- From Silos To Services
If you're a regular reader of this blog, I've talked about this at length -- and here is another representation of the same ideas, perhaps with better formatting :)
The core idea here is simple: if you're going to be in the business of delivering attractive and competitive services, you're going to have to organize around that principle.
In this model, there's less emphasis on the specific technical disciplines, and more emphasis on creating and delivering services.
The model here shows four: infrastructure, app platform (database plus other components), application and user experience.  Each service category typically consumes services from beneath it, e.g. user experience services consume app services, app services consume app platform services, and everything consumes infrastructure.
By "platform", the intention is to represent "the platform that delivers those unique services", e.g. app support environment platform, specific application platforms, specific user experience platform and so on.
And infrastructure, of course ...
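A tiny, purely illustrative sketch of that layering: each service category declares the lower-level services it consumes, bottoming out at infrastructure.
```python
# Hypothetical sketch of the four-layer service model.  Each service
# category consumes the one(s) beneath it; infrastructure is the base.
SERVICE_DEPENDENCIES = {
    "user-experience": ["application"],
    "application": ["app-platform"],
    "app-platform": ["infrastructure"],
    "infrastructure": [],
}

def full_stack(service):
    """Everything a service ultimately depends on, top to bottom."""
    stack = [service]
    for dep in SERVICE_DEPENDENCIES[service]:
        stack += full_stack(dep)
    return stack

print(full_stack("user-experience"))
# ['user-experience', 'application', 'app-platform', 'infrastructure']
```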
The emphasis on "IT service management" and "IT services marketing" is intentional.  We've chosen to over-emphasize the marketing aspect for one simple reason: it's almost always missing from IT organizational DNA, and that lack ends up hurting badly in this model.
Focus On The Business
If we dig in deeper to what goes on here in these two disciplines, you'll see familiar activities, but completely oriented around what people actually want, vs. what IT decides to build.
I call this shift in mindset the "retail gene".
You'll find it all around you -- except in many IT organizations.  What do people want and need?  How do they want to consume it?  How do we productize it?  How do we market it?   How do we measure it?  How do we support and manage it?
Just like any MBA studying a market :)
Indeed, I'd argue that the required skills here aren't all that hard to find -- once you step out of the rather cloistered world of IT professionals.
The EMC IT Example
As many of you know, we often use our own IT organization's transformation as an illustrative example of what's involved.  Sure, every situation is different, but we've seen enough similarities between our own experiences and those of our customers that we think the patterns and models here are very useful.
One of the segments that customers usually find quite interesting is the detail behind how our infrastructure organizational model shifted dramatically between 2008 and the present-state 2011.  Had we known then what we know now, we could have gotten there far faster and with a lot less churn -- one of the reasons we're highly motivated to share our experiences and make it easier for others.
The starting point was the traditional technology silos that we're all familiar with.  Progressively, the organization morphed into a services-oriented structure where the definition, promotion and delivery of services now is the organizing principle.
The same model is now being progressively applied to app platforms, enterprise applications, as well as user experiences.
Quantifying The Benefits
Frequently, I get asked -- how do you measure success when considering a transformation like this?
I think it's a fair question -- even if you "get it", there's the unenviable task of having to convince many others.
Although EMC offers popular consulting services to help make the case for a transformation investment (and quantify resulting benefits), it's often useful to share our own internal EMC IT experiences.
The chart here graphically shows our progress, using "degree of virtualization" as a proxy for how much we've transitioned to an IT-as-a-service model.  Yes, the "percent virtualized" metric isn't perfect in several regards, but it is widely comparable across organizations.
Right now, we're stating that we're around 80 to 85% virtualized across our landscape.  Some smaller portion of that runs in the IT-as-a-service model, but the gap between "virtualized" and run as-a-service is closing fast.
We can do a good job of quantifying the benefits at each stage of the journey (both "hard" and "soft" metrics), but -- here's the important point -- the ideal measurements change from stage to stage.
In the first phase where you're simply virtualizing non-critical IT-owned applications, it's all about cost optimization: spending less on servers, storage, etc. and the people who have to administer them.  Most everyone understands what simple virtualization can do in this regard.
In the second phase, it's more about delivering better IT services under the traditional guise but using virtualization as a foundation.  By better IT, I mean "fewer outages" and "better data protection" and "more consistent IT compliance" and "reduced operational effort".
As well as saving some serious money :)
In the third phase, it's really all about being more responsive to the business -- agility -- as well as being able to deliver better IT services (more robust, more secure, more flexible) and, of course, even more cost savings.
One Of The More Interesting Charts
About a month ago, the EMC IT folks started using this chart, showing how -- as agility increased -- the IT spend dramatically shifted from "keeping the lights on" to "investing in new capabilities".
This outcome shouldn't be surprising -- as the back end of IT gets more nimble and efficient at delivering easy-to-consume and flexible services, there's more time, energy and resources in doing the new stuff.  And those new and innovative capabilities are always big consumers of the service catalog -- as you'd expect.
As an exercise for the reader, where would you put your current organization on time to provision a developer-ready or user-ready application environment?  Not just the server -- everything you need for it to be directly usable for the intended purpose.
Or, in terms of how much of the IT spend goes to doing new initiatives vs. feeding as-is operations?
Just knowing what the answers might be is a useful exercise.
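If you want to rough out answers, a back-of-the-envelope sketch might look like this -- every number below is a placeholder to be swapped for your own.
```python
# Back-of-the-envelope sketch for the two exercises above.
# All figures are hypothetical placeholders.
provisioning_steps_days = {
    "request approval": 5,
    "hardware / capacity allocation": 10,
    "OS, middleware and app stack build": 7,
    "security and compliance sign-off": 3,
}
time_to_usable_environment = sum(provisioning_steps_days.values())

it_budget = 10_000_000
keep_lights_on_spend = 7_200_000
new_capability_spend = it_budget - keep_lights_on_spend

print(f"Time to a user-ready environment: {time_to_usable_environment} days")
print(f"Spend on new initiatives: {new_capability_spend / it_budget:.0%} of IT budget")
```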
EMC IT's Cloud 9 Example
Here's just one of the internal services being offered by our internal IT team, dubbed "Cloud 9".  Note: a little marketing can go a long way :)
It's self-service infrastructure for anyone who wants it, typically for transient requirements of 90 days or less.
Longer use and/or more demanding requirements initiate a separate process, but if you can stay within the confines of what the service offers, you get what you need -- no questions asked.
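A purely illustrative sketch of that routing rule: the 90-day transient window comes from the description above, while the sizing limits are assumptions standing in for whatever the real service offers.
```python
# Hypothetical sketch of the self-service rule: requests inside the
# service's confines are auto-provisioned; anything bigger or longer
# kicks off a separate (human) review process.
MAX_TERM_DAYS = 90   # transient requirements only (from the post)
MAX_VCPUS = 4        # assumed service limit
MAX_MEMORY_GB = 16   # assumed service limit

def route_request(term_days, vcpus, memory_gb):
    within_confines = (
        term_days <= MAX_TERM_DAYS
        and vcpus <= MAX_VCPUS
        and memory_gb <= MAX_MEMORY_GB
    )
    return "auto-provision" if within_confines else "separate review process"

print(route_request(term_days=60, vcpus=2, memory_gb=8))     # auto-provision
print(route_request(term_days=180, vcpus=8, memory_gb=64))   # separate review process
```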
The biggest audience?  Internal software developers -- both within EMC IT and our various product development communities.
What you can't see from this slide is what's behind it: a small team of people talking to the users of the service, figuring out how they're using it, discovering unmet needs, trying to improve the service, etc.
Put differently, this wasn't your typical hit-and-run IT project; it's part of a sustaining organizational function to deliver internal IT services that people want to consume.
And that makes all the difference in the end.
Being Public About Inhibitors
Change isn't easy, and we're doing everything we can to share not only our internal challenges, but the challenges we hear from our customers who are doing the same.  Here's a summarized list.
On the technology front, no real surprises here.
Building and running large, shared infrastructures that deliver a variety of different services is very different than the one-application, one-infrastructure approach we're all so familiar with.
It's not that it's intrinsically hard; it's just different.
As you increase the degree of automation, the people responsible for the outcomes get progressively more nervous.  You need to allow time for comfort and experience to develop around a certain degree of automation before progressing to the next level.
Again, it's really human nature complemented by practical experience.
On the operations front, there's a lot to address.  You're basically building a new function that perhaps didn't exist before.  There are new management processes, new life-cycle processes, and all the skills, roles and org structures that go with it.
The challenge here seems to be directly proportional to the size of the IT operation, with smaller IT organizations having a decided advantage.
The real heavy lifting happens at the interface between IT and the business.  There's the need to establish a service-oriented culture when non-IT people interact with IT people.  There's the need to teach businesses to consume off the service catalog vs. buying stuff and handing it over to IT to run.
And finance has to get involved to not only support the shared services model, but also take responsibility for managing overall levels of IT spend -- in this model, IT can't be doing that and be expected to be successful.
Advice On Getting Started
Again, every situation is different, but we've seen enough examples where some useful patterns are starting to emerge.
First, there needs to be a clear case made for investing in a transformation.
Going from one style of IT to another isn't simple, easy or cheap.  The case needs to be made to executive management, business unit leaders, and the IT organization itself.
More than a few IT leaders have launched themselves on a transformational journey without taking the time for this important step.  As a result, they've had to stop, go back, and retrace their path before proceeding again.  Or, they've misjudged their readiness to move ahead -- and things end up taking much longer than they should.  Either way, making the case is a key step in the journey for most.
A second key step is finding business champions outside of IT -- people with a vested stake in consuming the services to be delivered.  Not so much as a source of funding, but as your first "target customers" who can validate and consume what you intend to deliver.
A third key step is to break off a small team from the mainstream IT organization, and empower them to move ahead quickly without having to drag the legacy behind them.  As they mature their capabilities and processes, they can progressively handle more and more of the IT workloads using the new ITaaS model.
A fourth key step is to design the first round of services for other IT groups who can benefit from attractive, easy-to-consume services.  Almost every time, this turns out to be infrastructure services for application development and testing.  This "do it inside of IT first" approach gets to a quick and visible win without having to change the relationship between IT and its users at the outset.
Finally, be very open and honest about the inhibitors along the way.  Change isn't easy, and being absolutely transparent about the challenges ahead makes them a bit less, well, challenging.  That's worked for us internally here at EMC; we'd highly recommend it for others as well.
How EMC Can Help
Since we've been working on this for a while, we've amassed a considerable portfolio of expertise, solutions and foundational technologies that can greatly accelerate our customers' journey to a competitive ITaaS model.
We can't cover everything we and our partners do here, so consider this a sampling of highlights.
In addition to the "making the case" and "readiness assessment" consulting work described above, we can dive down deeper into the next layer of tasks.  We can help IT organizations construct their first service catalogs, do the consumption modeling, and help get a handle on the financial model as well.
We also have a healthy and growing roster of compatible service providers, who play an important role.  Not only can you benefit from understanding what they're doing (example: learn from their rate cards!), they're an important consumption option that can help accelerate the transition.
More than a few of our customers have started their journey with rented infrastructure and capabilities from a compatible service provider, enabling a quick win to the business while building the case for a more substantial internal investment.  They can focus on making the front-end of IT-as-a-service work without having to invest in the back end up front.  When it's time, the move is easy.  Or the workloads can stay where they are if you're happy :)
When it comes to operations, there's a lot to be done, and we're prepared to help.
We can identify the key skills and roles you're going to need, and various options for sourcing the talent.  We can define the operational process you'll need, how you'll get there, and how you'll be continually improving them.  We can show you organizational models that work, and help you plan on how to make them work in your own organization.
Finally, there's technology to consider.
From Vblocks to data protection to security to our EMC Proven Solutions, the technology you'll need is there, and it's relatively easy to deploy and operate -- whether you do it yourself, or use our implementation and migration services.
Final Thoughts
Going from a world where you don't have to compete -- to one where you do have to compete -- can be jarring to everyone involved.  Whether you're an IT specialist, IT manager or IT leader, it's a completely different game when your capabilities are rigorously stacked up against those of others.
Part of the inherent challenge is simply recognizing what's happening, and starting to do something about it.  There are literally hundreds of external IT service providers that compete for enterprise IT business each and every day, and are successful at it.  And there's no reason why an internal IT function couldn't do the same -- but only better, due to the natural "home field" advantage.
IT leadership, in particular, is finding themselves especially challenged.  The model for IT success is shifting rapidly -- it's now about delivering valuable IT services that the business wants and needs, doing so at a reasonable cost, while maintaining the control that the organization demands.
The good news is that we now have plenty of examples of IT organizations that made the change, and are starting to see the benefits.  It's working for them.  And those that have made the decision to invest in transformation are justifiably proud of what they and their teams have achieved.
And, make no mistake, we're proud at EMC to be a part of their success.
Who said IT was boring?

By: Chuck Hollis


Wednesday, September 28, 2011

Cloud, Big Data … and Healthcare

Healthcare is a fascinating industry. 
From the drug companies and the insurance companies to the hospitals and clinics -- it's an enormously complex and fast-moving value chain that has a single goal: to help each and every one of us live better and longer lives.

If one were to ask the question "in what industry do you see the fastest adoption of cloud and big data", the answer -- at least for me -- would be healthcare.


A Unique American Perspective?

Unlike most parts of the world, healthcare in the United States is clearly a business.  Sure, the federal and state governments subsidize certain aspects, but every participant is competitively motivated to deliver ever-better services to more people at ever-lower costs.

As I travel, I haven't seen anywhere else the degree of competitiveness driving innovation and rapid technology adoption across healthcare that I see in the United States.
Not even close, if I'm being honest.

Yes, our US healthcare system has its various and sundry challenges, but one redeeming quality appears to be the rapid pace of evolution and improvement.  There's a heckuva lot of money on the table (not to mention lives at stake), and there's a deep roster of motivated players with plenty of resources to invest.

So, let's start on our armchair tour of how cloud and big data are combining to revolutionize healthcare, especially in the United States ...

The Supply Side -- The Drug Companies

Clouds and big data are really nothing new to the companies that research and manufacture drugs, therapies and devices. 

Some of the earliest and most productive clouds can be found at drug companies doing all forms of research.
The underlying science is demanding ever-more elastic compute, served up in a convenient, self-service fashion. 

Drug effectiveness in the field is essentially a big data application, especially when correlated with other health and treatment information, demographics and more.  The more data from more sources, the better the insight. 

And, as a result, the underlying data repositories now appear to be growing at hyper-speed.
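As a toy illustration of that kind of correlation (not any particular company's pipeline), imagine joining field-outcome records against patient demographics and summarizing effectiveness by cohort; all of the data below is invented.

```python
# Toy sketch: join invented field-outcome records with demographics and
# summarize drug effectiveness by age cohort.
import pandas as pd

outcomes = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "drug":       ["A", "A", "A", "A"],
    "improved":   [True, False, True, True],
})
demographics = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "age_band":   ["40-60", "60+", "60+", "40-60"],
})

merged = outcomes.merge(demographics, on="patient_id")
print(merged.groupby("age_band")["improved"].mean())
```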

Perhaps the most extreme example of this is the newer genomics companies.  Yes, an enormous amount of data is generated by sequencing genomes, but that specific data is of limited use unless it's correlated with actual life histories, therapeutic results and more. 

The underlying formula here appears to be (big data) * (big data) * (big data) -- or more.

Indeed, within these organizations, you'll sometimes find two distinct IT functions -- one pointed at the more traditional back office, desktop, collaboration sorts of things, and a second one that's directly aligned with the researchers.

More and more of these research functions are already pursuing a hybrid cloud strategy: invest in on-premises infrastructure for the more predictable infrastructure needs, and use external cloud providers when the big stuff comes along.

The prognosis?  More cloud, more big data -- a lot more.  Why?  It's core to their business model -- finding new drugs, therapies and treatments that do more and cost less.

The Payers -- Insurance Companies

By comparison, the healthcare insurance companies appear to be on the cusp of being completely transformed by both cloud and big data.  There's a lot going on in their world.  Old IT thinking is quickly giving way to new IT thinking, and it's rather cool to watch -- from a safe distance, that is.

For starters, many of them are moving from a B2B model (selling healthcare coverage through employers) to a B2C model where they have more of a direct relationship with their ultimate customers.  Web portals, mobile apps, self-service tools -- it's a new world for IT.

Insurance companies want to know as much as possible about their customers -- not only their health histories, but their demographics, lifestyle choices and more.  Not only does this help them coach their clients to make healthy choices, it helps them price coverage accurately.

They also care a lot about the costs and outcomes on the delivery side -- what treatments work best in which circumstances, which hospitals and doctors are most efficient (or not) and more.  It's no mistake they should be highly motivated to collect as many sources of information as possible, and invest heavily to glean continual nuggets of insight and understanding.

Although there are a few promising efforts, I think insurance companies will need to either continually invest in improving their big data analytics capabilities, or perhaps be acquired by those that do.  Better understanding of massive and diverse data will be a key lever in all of their business models.

Not only are many healthcare insurance companies actively building internal private cloud models to gain efficiency and agility (I know, I meet with them regularly), but there's a specialized service provider market emerging as well -- our good partner CareCore comes to mind.  It's not hard to imagine a complex ecosystem of service providers forming to support different aspects of healthcare insurer's needs.

The prediction?  A very different industry and very different IT landscape within a few short years.  Why?  Both cloud and big data appear absolutely core to healthcare insurers' business model, and those that make the investment sooner should gain a not-inconsiderable competitive advantage.

The Delivery Side -- Doctors, Nurses, Hospitals and Clinics

The IT you find in these environments appears to be moving quickly to a hybrid cloud model.  Skilled healthcare workers are inevitably in short supply, and -- not surprisingly -- they're also very demanding of their IT environments.

As a result, IT organizations in these settings tend to be incredibly agile and responsive to their users' needs.  You don't have to lecture them on the whole aligned-with-the-business thing.  They get it. 

Fortunately, many hospitals and clinics were early adopters of VMware, so it shouldn't be a surprise that they're now moving to fully-virtualized, pooled-resource models ahead of other industries.

It's easy to understand why -- these organizations are under enormous cost pressures, and they want to spend every available dollar investing in delivering better healthcare vs. investing in IT resources.

A few larger and more progressive hospitals are now moving to becoming specialized IT service providers for the smaller players in their region, using their IT capabilities to allow healthcare delivery organizations of all sizes access to world-class IT.

The big data side is where there's the most potential.  Not only do healthcare delivery organizations generate an enormous amount of potential raw input to analytical engines, they're also at the point where predictive analytics can do the most good -- when the patient is being treated.

It's unlikely that most healthcare delivery organizations will be able to invest in large-scale analytics capabilities and their associated information bases -- even though they often do their own research -- but they can easily consume those provided by others, such as the insurance companies and drug research companies.

Indeed, there's the glimmering potential of someday having large-scale healthcare analytics capabilities that ingest data from the point of capture and in turn provide real-time predictive capabilities on specific courses of treatment.

Not to mention, perhaps generating some interesting revenue streams from the data they can sell to both drug companies and insurance companies :)

A Few Final Points

Yes, there are good arguments as to why healthcare shouldn't be driven by the profit motive, but -- driven by it -- amazing and transformative forms of healthcare are now being created that will ultimately touch all of our lives.  Capitalism does have its positives ...

The more compute that's easily and dynamically accessible, the better.  The more data that can be assembled, harnessed and correlated, the better.  If it's linked and aggregated across organizational, industry and geographical boundaries, so much the better. 

More data, more insight, more value -- it's the fundamental equation of big data analytics.

Luke Lonegan of Greenplum recently coined an interesting phrase to describe the key technology thinking here -- "computable storage" -- where analytics and data can be freely combined, integrated and scaled at maximum performance and minimum cost.

No, technology itself doesn't appear to be a key inhibitor, nor -- in reality -- do the associated costs, especially in light of the rewards.  And, yes, key talent is scarce, but that will come in time.

No, it appears that the fundamental barrier to the bright new world is staring right at us in the mirror. 

It's ourselves -- and our natural need for privacy and confidentiality, as expressed in various regulations, customs and practices. 

Our innate desire for data privacy must ultimately be balanced against that exact same rich data being a key raw ingredient that -- when aggregated -- may lead to a greater good for all.
I believe that the era of big data and cloud transforming healthcare -- in all its aspects -- will ultimately be gated by this fundamental change in our collective perspectives.

Would I be comfortable sharing some of my most personal and sensitive information if it someday leads to helping many other people? 

Years ago, people donated their bodies to science.  Tomorrow, we may be asked to donate our data to science.

That's something we're all going to have to think long and hard about ...

By: Chuck Hollis


Tuesday, September 27, 2011

A Big Effort To Support Big Data

Industry opinions around the topic of big data analytics range from wild-eyed enthusiasm to hardened cynicism.
My personal take?
The cynics should move on and find something better to grumble about; there's absolutely stunning potential being driven by a perfect storm of exploding data sources, nose-diving infrastructure costs and new toolsets to make data dance and sing in ways we haven't seriously considered before.
As of late, one of these toolsets -- Hadoop -- has enjoyed more than its fair share of attention.
One of its primary strengths is supporting efficient batch processing of enormous unstructured data sets.
Yes, you read that right -- batch processing is sexy again :)
The core technologies are thankfully open sourced, with the Apache Hadoop project at the core of the effort.  And yesterday, EMC and Greenplum announced a massive donation to the cause.
In A Nutshell
Imagine gigantic data sets coming from everywhere: web servers, social feeds, metering, etc.  There's a first level ingest/filter/correlate step to extracting value that's roughly analogous to a mining process -- raw ore in; useful minerals out.  Maybe that's why they call it data mining?
Big, scale-out commodity infrastructure is demanded -- the bigger/cheaper/faster, the better.  You need a set of tools on top of the plumbing to manage the data sets, schedule jobs and workflow, etc.  That's where Hadoop comes in.
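For readers who haven't seen the model up close, here's what that first-level grinding can look like -- a minimal, hypothetical Hadoop Streaming mapper and reducer in Python that filter web-server log lines down to server errors and count hits per URL.  The log format and field positions are assumptions.
```python
# mapper.py -- hypothetical Hadoop Streaming mapper: keep only server-error
# responses from (assumed) common-format web logs and emit "url<TAB>1".
import sys

for line in sys.stdin:
    fields = line.split()              # assumed: ... request-path ... status ...
    if len(fields) < 9:
        continue
    url, status = fields[6], fields[8]
    if status.startswith("5"):         # filter: 5xx server errors only
        print(f"{url}\t1")

# reducer.py -- sum the counts per URL (Hadoop delivers input sorted by key).
import sys

current_url, count = None, 0
for line in sys.stdin:
    url, value = line.rstrip("\n").split("\t")
    if url != current_url:
        if current_url is not None:
            print(f"{current_url}\t{count}")
        current_url, count = url, 0
    count += int(value)
if current_url is not None:
    print(f"{current_url}\t{count}")
```
You'd run something like this with the standard Hadoop Streaming jar (exact jar path varies by distribution), e.g. hadoop jar hadoop-streaming.jar -input logs/ -output errors/ -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py.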
Hadoop's roots are a story unto itself; the present state of play is a core open source project (Apache) and several derivative variants that are being commercialized by various vendors, including EMC's Greenplum division via the Greenplum HD offering.
When EMC announced their intentions to offer an enterprise version of Hadoop, there was the predictable concern about EMC's ability to give back to the open source community that not only created it, but was the source of major technological evolution going forward.
Well, I think we found an important and useful way to give back.
The Greenplum Analytics Workbench
One of the great aspects of the open source model is you get the best-of-the-best intellectual contributions from key stakeholders who are actually using the technology.  Some of the best code on the planet arises from open source models.
One of the downfalls of the open source model is that there's not a lot of money around to fund expensive stuff, like massive computing infrastructure.
When it comes to open source big data efforts, that's a special problem: unless the code is tested at reasonable scale, it's a work unfinished -- and less-than-useful to people who want to use it in large-scale production environments.
So EMC and Greenplum are leading an effort -- along with a great list of other vendor participants -- to create a 1000-node, 24 petabyte lab on behalf of the Apache project.  They couldn't afford a scale-out test environment, so we're building one for them.  And donating the equipment, facility costs and supporting labor.  That's not an inconsequential investment.
It should be up and running this January.  1000 physical nodes can easily become 10,000 or more logical nodes (thanks to VMware!), which allows some serious scaling of compute, network and data.  The team can find the problems that only happen at scale *before* it gets into the distribution.
That -- in effect -- greatly accelerates the maturation of the Hadoop code in a significant and meaningful way that can't readily be achieved by other means.  There's just no substitute for a big lab full of equipment :)
If you bother to read the quotes from the press release, you can almost feel the enthusiasm from the team.  My inner geek can relate.
My personal hope is that we can do more of this: there's an entire cadre of data scientists and data engineers that need to learn the skills to wrangle data sets at massive scale.  I can imagine us teaming up with educational institutions at some point to do exactly that.
And On To The Product News
Greenplum is essentially a software company.  Part of their compelling "secret sauce" is a modern database that is the essence of shared-nothing scale-out architecture.
Want to go faster?  Just rack up more commodity hardware, and you're off to the races.  Nothing could be simpler -- or more efficient.  At the end of the day, scale-out wins when it comes to big data.
Since being acquired by EMC, their software stack has moved beyond the initial GP database to include their enterprise-grade Hadoop distribution (Greenplum HD) which acts as a front-end for data loading and first-level grinding, and Greenplum Chorus which provides the "front end" portal for driving workflows and collaboration in the environment.
Of particular interest is the Greenplum DCA -- data computing appliance.  Yes, it's nothing more than an optimized set of commercial technologies (servers, storage, interconnect, etc.) but it's pre-configured, pre-tested and supported as a whole using EMC's enterprise support model.
I know, many of you reading this would love the idea of having the opportunity to design, assemble and support your own creation, but for a lot of folks that just isn't an attractive option.  They want to use the technology, and not invest in hand-crafting it.
The important announcement here was around a more-unified DCA.  In addition to the original modules that support the Greenplum database (available in both high-capacity and high-performance configurations), there are now Greenplum HD modules to support those workloads, and an interesting new Data Integration Accelerator module that supports a variety of third-party analytics tools from the community and our ISV partners.
Customers can add various modules as their needs change, and as the underlying technologies go through their predictable tick-tock of performance increases, price decreases and expanding capacities.
In essence, the Greenplum DCA has now become a single infrastructure that can support the big data analytics process: from raw information ingestion to advanced analytics built on industry-standard hardware and supported by a single vendor.
And I'm guessing it's going to be rather popular :)
Big Data Analytics And Core Business Processes
At the recent EMEA Analysts Summit, Jeetu Patel ran a fascinating session on how the insights gained from big data analytics were causing many enterprises to re-think how they built their core business processes, and how Documentum's new xCP environment was playing a key role.
The classic example is loan scoring.  Traditionally, that might have involved such things as credit history, income, employment status and so on.
But when that is complemented by analytics that include house pricing predictions for the local market, the local employment picture, macroeconomic forecasting, etc. etc. the loan scoring accuracy enters an entirely new realm.
Score loans better and you can price them better.  Price them better, and you make a lot more money.
It doesn't take a rocket scientist to grasp the impact.
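A deliberately over-simplified sketch of the idea: take a traditional score built from applicant data, then fold in external analytics such as a local house-price forecast and the local unemployment picture.  Every field and weight below is invented for illustration.
```python
# Deliberately simplified loan-scoring sketch.  The traditional score uses
# only applicant data; the enriched score folds in external analytics.
# All weights, fields and figures are invented.
def traditional_score(credit_score, income, employed):
    return (0.6 * (credit_score / 850)
            + 0.3 * min(income / 150_000, 1.0)
            + 0.1 * (1 if employed else 0))

def enriched_score(credit_score, income, employed,
                   house_price_forecast_pct, local_unemployment_pct):
    base = traditional_score(credit_score, income, employed)
    # External signals nudge the score up or down.
    adjustment = (0.05 * (house_price_forecast_pct / 10)
                  - 0.05 * (local_unemployment_pct / 10))
    return base + adjustment

print(round(traditional_score(720, 95_000, True), 3))
print(round(enriched_score(720, 95_000, True,
                           house_price_forecast_pct=-8.0,
                           local_unemployment_pct=11.0), 3))
```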
He gave another example of how using external social feeds greatly changed a core process everyone uses: hiring and recruitment.  And -- without too much effort -- you can come up with hundreds of core business processes across industry after industry that fundamentally change in the face of advanced analytical insight.
Jeetu flatly stated that most core business processes would be re-engineered along these lines over the next five years.  I have to agree -- it's inevitable given this perspective.  And I'm rather glad that an important division of EMC (IIG) is creating the enabling technology (xCP) to exploit the business value gleaned from big data analytics.
Stepping Back A Bit
For many of us, we see big data as the next important frontier for creating new value from information.  Yes, there will be plenty of cool technologies (at massive scale!), but the real challenge will be creating end-to-end environments that help organizations move from raw, unfiltered data feeds to critical insights and the ability to react as part of their core operations.
Exciting times indeed.  
And I feel privileged to work for a company that's investing in this brave new world.

By: Chuck Hollis

Monday, September 26, 2011

Bigger Isn't Always Better

During most of my career in IT, I have tended to hold the larger IT shops in a certain awe: their large and diverse organizations, their ability to field large IT projects, and -- especially from a vendor perspective -- their unquestionable spending power.
I sometimes wondered how the more modest-sized IT shops could ever keep up with the big guys?
No longer.  When it comes to IT transformation -- changing how IT operates to look more like an internal service provider vs. a traditional technology-and-project shop -- the smaller IT teams appear to be winning the race.  They're getting there sooner and delivering results far faster than the behemoths.
And now it looks like many of the bigger IT shops may end up paying a painful price for their size and complexity.
Motivations For Change
If I had to boil this whole cloud and IT-as-a-service thing down into a simple, concise thought, it'd be that -- for the first time in recent history -- IT organizations have to compete for the internal business.
No longer can IT assume they've got a monopoly.  There are too many great external IT services out there, and they're far too easy for us business types to consume.
I start my keynotes these days with a picture of my new IT department.
Yes, it's jarring, unfair, unworkable, etc. -- but it's also the cold, sober truth.
Fail to successfully compete for internal IT business, and bad things can happen.
For starters, IT can quickly lose control over IT consumption, and ultimately a big hunk of their relevancy.
Fail to maintain control (or be relevant) and it gets worse: outsourcing, budget reductions, new leadership installed and so on.
Just like any other function, the business demands competitive and attractive services that meet the needs of the people consuming them.  Otherwise, they'll go elsewhere.
The IT Model To Compete Changes As Well
Organize yourselves to run classical IT projects across the traditional technology disciplines, and that's what you'll end up delivering: classical IT projects across the traditional technology disciplines.
Organize yourselves to deliver attractive and competitive services that your stakeholders want, and -- hopefully -- that will end up being what you deliver.
Indeed, internal IT organizations are uniquely positioned to compete for their internal business for one powerful reason: they have the potential to understand their internal customers better than any other external provider.
Own the relationship; own the customer -- it's something those of us on the vendor side of the business have known for a long time.
And now we're teaching it to many of our IT customers who are now unfortunately being seen as one of many alternatives as opposed to a de-facto monopoly.
If you haven't seen my previous posts on this subject, you might be interested in this and this.
When (Large) Size Works Against You
IT organizations come in all shapes and sizes.  When the total number of badged IT employees gets north of around 500, the size of the organization itself tends to be the source of problems.
At 500 IT employees, you've got enough heft to create some good-sized silos (empires?) within IT: server team, storage team, desktop team, application team, database team, network team, security team, operations team, etc. etc.
Each of these teams has clearly established roles and responsibilities.  Each has a way of doing things that they're largely comfortable with.   Each sub-group will likely be resistant to meaningful and substantive change -- which is precisely what is required.
The larger and more entrenched an IT organization is, the more daunting this challenge.
When (Smaller) Size Works For You
The smaller IT shops (10-100) are quite different.  People tend to play multiple roles just to get the work done.  There's a lot of collaboration and rotation.  Processes and workflows tend to move quickly and fluidly -- just to get the job done.
Very little turf and empire building -- there's just not enough scale to make that practical.
Most importantly, these smaller IT groups tend to avoid walling themselves off from the rest of the organization.  They're engaged and supportive of what their users are trying to get done -- they're not locked away in some IT organizational castle surrounded by a bureaucracy moat.
As such, these teams are transitioning to an IT-as-a-service model far faster (and with far less drama) than their larger IT counterparts.  They're just getting on with it.
As an example, check out this video done at the recent EMC World.  Yes, it's a promotional piece, but look and listen to the people being interviewed.  They're making the change.
And they appear to be largely enjoying the experience.
How Large IT Organizations Can Act Small
If larger and more entrenched IT organizations are finding that it's becoming difficult to transition to a new IT operating model, what can they do about it?
One popular approach seems to be around large-scale change management within the existing organization.  Lots of leadership meetings, lots of projects, lots of grinding away at the pieces until they eventually fit into the new model, or leave.
I meet many larger IT organizations that are doing exactly that -- large-scale change management.  Usually it involves a new senior IT leadership team that's been brought in to turn things around.  But it takes a lot of time and a lot of management effort -- and outcomes can be uncertain.
No one in the meeting seems to be having fun :)
A second, more creative approach is the dual-IT model.
Take the entrenched legacy organization, and put them in a conceptual "box".  Let them use existing processes and technologies to do what they always have done: deliver classical IT services.
At the same time, create a small, focused "new IT" team.  Give them the freedom to build IT differently (shared pools of resources), operate IT differently (delivered as a service vs. chunks of technology) and consume it differently (convenient for everyone to consume).
Make them responsible for creating and delivering attractive and competitive IT services that others need.
Point them at use cases that aren't being met well by the existing approaches, including use cases within IT itself.  Give them some time to mature their processes and capabilities.  As they improve, consider them as a candidate for some of the new requirements coming down the pike.
Don't make the "new IT" guys have to work closely with the legacy team, be forced to go to their meetings, be bound by their legacy processes, etc.  Otherwise, the new guys will get smothered by the legacy team.
It's a separate team with a separate mission.
If your experience is anything like ours at EMC, over time your "new IT" team will be the model for most of the enterprise IT experience: attractive and competitive services exposed to other IT organizations and end users.  Pervasive use of shared and pooled resources across the organization.  Aggressive outreach to your internal "customers".  With roles, workflows, processes and tools that don't look like what you're using today.
Congratulations, you now have transformed into a competitive internal IT service provider :)
Based on my personal observations, this sort of dual-IT approach can greatly accelerate the transformation as compared to the brute-force organizational change management approach.
Your users will see the benefits far sooner, for one thing :)
Envisioning The "Legacy Free" Data Center
Last week, our IT team opened up a brand-spanking-new cloud data center that is in many aspects the natural outcome of this approach.  I use the term "legacy free" to describe not only the infrastructure (Vblocks throughout, a single version of vSphere, exceptionally efficient power and cooling, etc.) but also the operational processes as well.
Like other parts of EMC's IT investment, we're opening it up so people can see (and hopefully learn from) what we've done -- hence the Center of Excellence (COE) designation.
I think it's fair to say that the work and planning done to move into this environment helped speed up many of the transformation projects associated with EMC IT's re-engineering.
Nothing like a hard move-in date to help speed the decision-making along :)
Size Does Matter, But Not How You Might Think
In one sense, EMC IT's transformation is exceptional, given the size, scope and complexity of EMC's global IT operations.
In another sense, it's no big deal when compared to what I see day-in and day-out from much smaller IT organizations.
Their limited size and scope forces them to move quickly, make key decisions faster and align themselves with their internal customers in a way that's often quite rare in larger settings.
To all of you more modest IT organizations that have fully embraced virtualization and the IT-as-a-service model: congratulations!
Your larger peers are becoming jealous :)
  
By: Chuck Hollis

Thursday, September 22, 2011

Caution: Automobile Vulnerabilities are Closer Than They Appear

Remember when automobiles provided basic transportation and little else? Well, those days faded from sight in your rear-view mirror years ago.

Today’s cars can be started by your mobile phone and disabled from the Internet. Soon, some will even drive themselves. Many new models feature state-of-the-art infotainment systems, social networking capabilities and in-vehicle Wi-Fi hotspots. Built-in navigation systems and voice-activated controls are standard equipment these days—not just in high-end cars, mind you, but in entry-level Fords as well.

Mercedes, BMW and Lexus promote mind-blowing safety features such as blind-spot monitoring and dynamic stability control. There’s even a technology that measures a driver’s alertness and responsiveness and one that detects the presence of obstructions in the road ahead. However, as auto manufacturers continue to rush these new features to market, security cannot continue to be an afterthought.

Gone in sixty seconds
A person’s information is far more valuable than the vehicle they drive. And, given the amount of personal information showing up in cars, autohacking is poised to explode. Will the next chapter of “Gone In Sixty Seconds” or “Grand Theft Auto” involve drive-by hacking?

Gauging Your Preferences
Wherever personal information is available, there’s money to be made. Onboard systems that provide access to email, voicemail, social networking and location-based media offer a treasure trove of valuable personal information. New vehicles now contain RFID tags in the rims that transmit tire-pressure information to the car’s control systems. Researchers have proven that cyberthugs can use these and other wireless transmissions to hack into the car’s digital systems to compromise passenger privacy.
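
To make that concrete, here is a minimal sketch (in Python, with invented identifiers and values, not any real vehicle’s protocol) of why an unauthenticated wireless broadcast carrying a fixed sensor ID is enough to follow a specific car around town:

# Hypothetical illustration: a tire-pressure sensor broadcasts a fixed ID
# with no authentication or encryption, so anyone with a cheap receiver can
# log sightings and correlate them over time. All values are invented.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class SensorBroadcast:
    sensor_id: str      # burned in at the factory; never changes
    pressure_kpa: float
    timestamp: int
    receiver_site: str  # where the eavesdropper's antenna happened to be

def track_vehicles(broadcasts):
    """Group sightings by sensor ID; each group is one car's movements."""
    sightings = defaultdict(list)
    for b in broadcasts:
        sightings[b.sensor_id].append((b.timestamp, b.receiver_site))
    return sightings

observed = [
    SensorBroadcast("A1B2C3", 220.0, 1316000000, "downtown parking garage"),
    SensorBroadcast("A1B2C3", 221.5, 1316003600, "shopping mall, north lot"),
]
for sensor_id, trail in track_vehicles(observed).items():
    print(sensor_id, "seen at:", [site for _, site in trail])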

Targeting embedded devices in automobiles is already happening. A provider of aftermarket GPS systems was recording driver behavior and selling it to Dutch police, who used the data to target speeding vehicles. Perhaps the police should focus on their own security. One security expert was able to easily hack into onboard police cruiser systems, access dashcam video storage and copy and delete these files.

And what about the personal safety of drivers? Cellular signals from mobile phones and navigation systems now pinpoint a person’s location. Imagine the cyberstalking vulnerabilities that could be exploited by understanding a person’s behavior pattern, tracking their location and being able to remotely disable their vehicle. It’s scary, creepy—and yes—very possible.

Avoiding Digital Roadkill
Here’s the good news: There’s no need to become digital roadkill. There are proven technologies that make it easy to secure these systems. I believe that in the near future, security will be a key differentiator in the new breed of intelligent automobiles.

It’s fascinating stuff. If you’d like to learn more on this topic, I encourage you to download a recent report produced by McAfee and Wind River. It’s titled “Caution: Malware Ahead, An Analysis of Emerging Risks in Automotive System Security”.



By: Tim Fulkerson

 

Wednesday, September 21, 2011

Teaching Your Kids to [File] Share is not Always a Good Thing

My oldest cyber-son was about 11 when he came home one day from a play date and said his friend’s dad works for this cool company and gave him some free software. I really didn’t think much of it and let cyber-son #1 (CS #1) install said software.

Within days we suddenly had all kinds of new music on the computer. He explained that it was a service that allowed users to share music with one another. The software was Limewire, and I had somehow allowed a kid to convince me that file sharing was okay.

It wasn’t very long before I figured out that not only was he illegally downloading music, but he was also opening up our computer to viruses, malware and potentially criminal hackers. We uninstalled the software and deleted the files pretty darn quick!

According to this month’s McAfee Security Advice Center Newsletter:
“P2P networks not only allow you to share fun content like music and movies, but they also allow you to share any file on your computer. This means that your child could download sexually explicit content from other users, and accidentally share sensitive personal information stored on your computer. Malicious content, such as viruses and spyware, can also be easily spread over P2P networks. Cybercriminals often hide viruses and even porn in popular downloads, such as popular songs or games, hoping they can trick users into downloading them.”

The tips in the McAfee article are:
-Protect your computer with a strong password so your child cannot log in without your permission and supervision.
-Remove the P2P application altogether. (A quick online search will help you find directions on how to remove various applications.)
-Consider using parental control software, such as McAfee® Family Protection, which allows you to filter the online content your child has access to and block objectionable content. It also allows you to monitor their activities, such as giving them time limits when surfing the web.
-Make sure that your family computer has a safe search tool, such as McAfee® SiteAdvisor® software, which alerts you with site ratings in your search results.

However, I wanted to take it a step further, because before I was aware of this type of program, I had no idea how to tell what my kids were downloading when I wasn’t around. Here are a few more tips of my own:

-Occasionally check your computer for newly installed programs. (Some names to look for are KaZaa, Limewire, Morpheus, Grokster, iMesh, Blubster, Bearshare.) See the sketch after this list for one way to automate that check.
-If you are not sure what a program is, use a search engine to learn what the program does and decide if it needs to be uninstalled.
-Remind kids that they need to ask permission to download anything onto the computer.
-Check out this post by Verizon for tips and FAQs about figuring out if you have illegal files on your computer.
-Sign up for the McAfee Security Advice Center newsletter to learn about the latest threats.
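
For the technically inclined parent, here is a minimal sketch of one way to automate that first check. It assumes a Windows family PC with Python installed; it reads the standard “Uninstall” registry keys where installers record the programs they add, and flags any name matching the file-sharing applications listed above. It’s a starting point for a conversation, not a replacement for parental control software.

# Minimal sketch: list installed programs from the standard Windows
# Uninstall registry keys and flag known P2P file-sharing applications.
# Registry entries vary by installer, so treat a match as a reason to
# investigate, not as proof of wrongdoing.
import winreg

P2P_NAMES = ["kazaa", "limewire", "morpheus", "grokster",
             "imesh", "blubster", "bearshare"]

UNINSTALL_KEYS = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
]

def installed_programs():
    """Yield the display names of programs recorded in the registry."""
    for path in UNINSTALL_KEYS:
        try:
            key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
        except OSError:
            continue
        for i in range(winreg.QueryInfoKey(key)[0]):
            try:
                sub = winreg.OpenKey(key, winreg.EnumKey(key, i))
                name, _ = winreg.QueryValueEx(sub, "DisplayName")
                yield name
            except OSError:
                continue

for name in installed_programs():
    if any(p2p in name.lower() for p2p in P2P_NAMES):
        print("Possible file-sharing program found:", name)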

I now have McAfee Family Protection installed and have it set to block all P2P file sharing. I also make sure I ask lots of questions of my kids when they want to install a new program. They may think a program is “cool”, but they don’t have any idea of the dangers.
Stay safe out there my friends!

By: Tracy Mooney

Tuesday, September 20, 2011

Telepresence: An Indispensable Technology for K-12 Classrooms

If you were to walk into any school these days—whether an elementary, middle, or high school—you would see students using some degree of technology. Whether it’s a computer in a lab, a tablet, or an interactive whiteboard, technology has no doubt made its way into students’ schooldays.
The trend towards technology in education stands to proliferate: according to Education Week, the Obama administration and the U.S. Department of Education rank facilitating technology access as their top goal during tough economic times. With this goal in mind, telepresence should rank highly on the list of technologies designated for schools—after all, telepresence offers several solutions to maintaining education quality under ever-tightening budgets.

Beyond addressing fiscal concerns, telepresence offers unique technological benefits that further some of today’s missions for educational progress. For example, the same Education Week article notes the rapid spread of E-learning, even in K-12 classrooms. Some students attend entirely virtual schools, while others take part in hybrid virtual and in-person models, the article said. No matter the extent of the virtual education component, telepresence can enhance these students’ experiences: they can go anywhere, speak with anyone, and see anything, provided a telepresence connection exists.

In addition to E-learning, project-based teaching methods increasingly transform grade-school classrooms. While I’ve written a bit before about the power of telepresence to enhance the project-based learning experience through sharing of knowledge with other students, I came across a post on the Techno Kids Computer Curriculum blog that made me think of even more ways telepresence could help make project-based learning powerful. The post discussed how technology in general could enhance project-based learning.  Specifically with telepresence though, students working on projects could gain face-to-face access to experts in their topic of study, motivating them to inquire more deeply into the subject. Using telepresence to speak with experts, community organizations, museums, government agencies, or other relevant entities helps build communication skills and confidence, both qualities that contribute to independent, self-directed learning.

Are you using telepresence to expose young learners to the world?

By: Kerry Best

Monday, September 19, 2011

Carbon Disclosure Project 2011 global launch—Walking the low-carbon talk

This past Wednesday morning, Carbon Disclosure Project (CDP) put on their annual global launch via Cisco TelePresence.  What CDP “launches” is a PricewaterhouseCoopers report on the responses to CDP’s 2011 Investor survey.

Cisco did very well.  We again made both the CDLI and the CPLI (the Carbon Disclosure Leadership Index and the Carbon Performance Leadership Index).  Cisco had the top disclosure score in the Information Technology sector.  In 2009 and 2010, we were #1 and #2, respectively, so we’re maintaining our focus.  In general the IT sector seems well engaged on carbon reporting, judging by participation rates—95% of Global 500 IT-sector companies responded to the CDP survey (38 of 40).  That’s a higher percentage than any other sector.
Congratulations as well to SAP and Sony for rounding out the top three in the IT sector.  I was in one of the Cisco TelePresence rooms with Peter Graf, SAP’s CSO, and got to give him the good news!
The event used Cisco TelePresence units at nine locations, shown below:


[Image: CDP Launch Cisco TelePresence locations]
For this virtual event, CDP assembled an array of 18 speakers that represented a broad range of perspectives.
  • corporations (IT, banking, retail, chemicals)
  • environmental advocacy
  • investor
  • United Nations FCCC
  • government
Within these groups, CDP also captured the developed and emerging markets viewpoints.
As I watched, I pondered Marshall McLuhan, the medium (Cisco TelePresence) and the message (from those 18 people scattered about the world).  With deft facilitation by Paul Dickinson (CDP Executive Chairman), we were treated to many speakers, many different perspectives, but each delivered quickly and with compelling intimacy. Going forward, is this how progress is going to be made on intractable problems?  Through this portal—”metaphorical table” as one speaker called it—will we be able to assemble the critical mass in terms of knowledge, geography and function to move the needle on a low-carbon economy?

Watch the recorded broadcast (direct Ustream, CDP website) and let us know what you think. (The streaming quality is fantastic; I watched afterward full-screen on a 27″ iMac and was mesmerized.)
Let’s move from the macro to the micro, from big ideas to individual responsibility and action.
Hats off to CDP for taking a risk and changing how they do business, choosing to walk the low-carbon talk.  I’m sure CDP’s annual global launch is very important for their organization and its mission.  In past years, CDP held an in-person event in New York City to coincide with UN opening week and CGI. (Cue the airplanes.) Last year, CDP began the transition to virtual, adding a Cisco TelePresence unit on stage that connected the auditorium to locations on five continents.  This year, CDP made the leap to all virtual, and provided a wonderful example of bringing together a far-flung and unique group to share views on a very difficult problem.  And none of these executives and leaders spent days and flew many thousands of miles for this discussion.

The technology exists, it works and it’s cost effective.  It takes effort to change, but the upside is intriguing.  Climate change is a global problem, but the solution will be built from billions of people making thousands of individual decisions.  Everyone trying new ways to live, work, play and learn.  So each day, think about your decisions and how you can lead the way.

By: Darrel Stickler

Thursday, September 15, 2011

Solving Education Budget Crises with Telepresence

As we’ve talked about before, Hillcrest High School in Riverside, California has state-of-the-art facilities. But, it has no students. Financed with $105 million of bond money allocated in 2007, the school now lacks the $3 million it needs from the state to operate for one year. California state budget cuts of $18 billion, one-third of the state’s education funding, keep Hillcrest’s halls and classrooms empty.


In similar dire straits as California, Minnesota’s state government this summer borrowed $2.2 billion from its public schools to end a government shutdown. The state has not set a date by which to pay the schools back.

California and Minnesota reflect the unstable conditions of the education budgets of several states across the nation. With each successive year schools make larger cuts, cram more students into crowded classrooms, and lay off more faculty.

As the economy struggles to recover, schools need an immediate solution. With telepresence, school districts can make one-time investments that provide all of the equipment necessary to take students anywhere in the universe—other planets included. The expense of a telepresence installation pales in comparison to the funds needed to build Riverside’s Hillcrest High School, and it requires no ongoing money to roll out, change, enhance, and expand programming. I’ve written before about how telepresence technology takes students places that tight field trip budgets might not permit, and it brings students to classrooms where they can access courses their schools don’t have the resources to offer.

It seems, in today’s economy, school districts can’t afford not to have telepresence connections. The technology stands to keep education afloat, to keep students learning by exposing them to the material they need to learn to become creative, analytical, capable contributors to society. I understand funds are tight, and it’s because of this tightness that education financing needs a new direction, with innovative investment options that further educational quality. Investing in telepresence leads to immediate, substantial educational returns that continue for years to come.

What do you think? Could telepresence help your local schools maintain their quality during difficult financial times?

By: Kerry Best

Wednesday, September 14, 2011

New Security Thinking For A New World

As an armchair student of the big transitions that happen in the IT industry, I've learned that the hard part is getting your mind wrapped around a relatively new perspective.  We, as technologists, tend to focus on how things work, rather than what they might mean.

For example, the internet isn't really about packets and DNS so much; it's about what can happen when connectivity becomes ubiquitous.  Tablets and smartphones are really about what happens when consumption becomes ubiquitous as well.

Cloud isn't about sterile NIST definitions; it's about what happens when IT organizations realize they're no longer a monopoly and have to compete for the business.

Big data isn't a capacity problem; it's really about learning a new form of creating value from massive amounts of information.

Get the right perspective on a specific transition, and the rest becomes mostly a matter of evangelism and execution.  Fail to get the right perspective, and you won't make much progress at all.

And so I enjoyed reading the recent findings of the RSA-sponsored Advanced Threat Summit. 

The Back Story

The latest challenge on the security front isn't necessarily an exotic new threat vector: it's the attackers themselves.  They're organized, well-resourced and patient.   And there's no silver technology bullet to effectively combat them.

If you'd like to read my quick backgrounder, it's here.

Although I am most definitely not a charter member of the IT Security Inner Ring, I do watch closely how the discussion is evolving here, and it appears to be moving very fast indeed. 

Many historical precepts about how to think about IT security appear to be quickly falling by the wayside.  IT security organizations are now re-thinking how they're organized and how they think about their job.  And all sorts of newer technologies are getting pulled in alongside more traditional ones.

It's A Good Time To Be Talking

During periods of rapid transitions, meaningful conversations between key stakeholders are incredibly valuable.  Towards that goal, RSA and TechAmerica recently sponsored an invitation-only Advanced Threat Summit with a list of participants and speakers that reads like a Who's Who In The Security World.

The good news?  You can get a quick synopsis of the key findings here.  The better news?  Interest in the topic is understandably sky-high: you'll be seeing more events being scheduled before long.

It's An Interesting Time To Be An IT Security Professional, Too …

Most IT security pros I meet tend to be under-appreciated, working tirelessly in the background to protect valuable information assets.  For many organizations, coming up with an enhanced approach to IT security is now front-and-center.   IT security managers are now being asked to be IT security leaders.

It's not an IT discussion anymore; it's becoming a business discussion.

But, if I'm being honest, my impression is that more than a few IT security professionals will need to step up their game to be effective in this new world.

For starters, I've noticed a natural tendency to only discuss security matters with a relatively closed group of other security professionals using mostly impenetrable language.

That's not good. People outside of the security world need to understand what's going on here, and why it matters to them.

The sometimes-alarmist tone of external communications has to give way to a clear-headed and sober view that there's a new class of problem out there, and organizations are going to have to invest in a new class of responses.

 There will be good days, and not-so-good ones.  There will be no perfect solution.

Inevitably, business leaders will need to invest in the tools and processes to understand and mitigate the new class of information risks, just like they understand and mitigate financial risks, geopolitical risks, legal risks and so on.

And in one sense, the new security discussion really isn't all that new :)

By: Chuck Hollis

Tuesday, September 13, 2011

Has Your Patient Data Made You a Victim of Medical Identity Theft?

How would you like it if you went to the emergency room for psychiatric services and that information ended up online along with your diagnosis, treatment and bill? This happened to 20,000 Stanford emergency room patients. According to an article in the New York Times, the information showed up on a website and remained there for a year. Not all of those patients were psychiatric patients, but it doesn’t matter what the diagnosis was; I am sure not one of them wanted the information posted online.

According to security expert Linda Criddle:
“In an environment where no data is compromised and where patient privacy was assured, the benefits of instant access to complete, accurate, medical records are obvious. Treatment errors could be reduced, tests could be streamlined, communication and collaboration between multiple care providers could be optimized, and emergency room physicians could have immediate access to the medications and allergies of incoming patients…

Instead, we live in a world where cybercriminals consider medical records a golden goose, enabling millions of dollars in revenue  from false billing. A world where the threat of blackmail over potentially embarrassing medical information or more innocent forms of exposure may induce patients to withhold important information from their care providers, and where medical histories, falsified by criminals to procure prescription medications, may in fact harm, even kill, patients as doctors assume the information is accurate. Until data went online, medical record theft was restricted to people breaking into a doctor’s office or disgruntled employees.”[1]

Until our health records are secured by the medical industry, it is up to us (the consumer) to keep a watchful eye on our medical identity, just like we do on our financial and social identity. Here are some ways that you can minimize or at least stay on top of medical breaches of your identity:
-Watch for medical bills that you did not incur; this also includes checking your credit report.
-See this publication for the signs that you may be a victim of medical identity theft.
-Click here for your medical record rights by state.
-If you believe your privacy rights have been violated, such as a doctor refusing to give you a copy of your medical records, file a complaint with the U.S. Department of Health and Human Services’ Office for Civil Rights here.
-Visit The Medical Identity Theft Information Page and Mitigating Medical Identity Theft if you think your medical identity has been stolen.

Stay safe out there!

Tracy

By: Tracy Mooney

Monday, September 12, 2011

Friends, Foes and Faceless Denizens – The Real Social Network

I recently performed a penetration test of a transportation company in the Midwest. Save for a few low-severity vulnerabilities, Company X had a well-managed public-facing network infrastructure.  Satisfied with the status of their network security, I turned my attention to the human network.

Searching for Company X on sites like Twitter, Facebook, and LinkedIn, I discovered employee names and corporate activities that were not shared on its website.  As the search continued, Company X’s culture, processes, and lexicon emerged from the social dialogue. Within three hours I was able to collect identifying information on key employees including birth dates, employment and educational history, and hobbies. These data points were cross-referenced with other resources on the Internet to profile Company X’s community involvement activities.
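
To illustrate how little glue that cross-referencing takes (this is not the tooling I actually used, and every data point below is invented), here is a minimal sketch: once records scraped from different sites share a key such as an employee’s name, they collapse into a single profile.

# Minimal sketch of cross-referencing scraped records into per-person
# profiles. The sites, names and details below are invented examples.
from collections import defaultdict

scraped_records = [
    {"source": "LinkedIn", "name": "J. Doe",
     "detail": "logistics manager at Company X since 2006"},
    {"source": "Facebook", "name": "J. Doe",
     "detail": "volunteers at a riverside 5K charity run"},
    {"source": "Twitter", "name": "J. Doe",
     "detail": "complains about the new badge-access system at work"},
]

profiles = defaultdict(list)
for record in scraped_records:
    profiles[record["name"]].append((record["source"], record["detail"]))

for name, facts in profiles.items():
    print(name)
    for source, detail in facts:
        print("  [" + source + "] " + detail)
# Each additional fact makes a pretext call or phishing email more convincing.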

This information enabled me to persuade employees to give me access to critical information and secured areas. This included usernames, passwords and access to employee-only areas.

Unfortunately, this was not an isolated scenario. McAfee recognizes that social media is used by miscreants as an “effective means to reach you as an individual user or an employee of a targeted company.” In a recent campaign, it noted that the disclosure-based trust model used by social networking sites also makes them vulnerable to miscreants that mine this data or persuade users to click on links that execute malware.

From a corporate governance perspective, social network threats are difficult to manage. Even if a company excludes itself from social networks, it has little control over its employees’ or customers’ activities. For example, the successful compromises of physical security on my social engineering engagements have been enabled by information gleaned from Facebook / MySpace pages run by company employees. In these cases, the guise created from my research allowed me to influence employee behavior to circumvent logical and physical access controls.

Individual users should also be cognizant of the privacy threats associated with social media disclosures.  Early in 2010, I researched three individuals from my Toastmasters chapter to illustrate how much information could be mined from the Internet. At the end of the two-day project, I had discovered their home and work addresses, the common restaurants they frequent, and their affiliation with community groups.  Shockingly, I was also able to find medical information for one of the individuals, including her condition, her primary doctor, and the hospital where her doctor works. This threat is complicated by the prevalence of targeted phishing and malware attacks directed at users based on their profile activity.

Some approaches proposed to address these threats attempt to model the formation of trust relationships in the physical world. Security Issues in Online Social Networks proposes architectures wherein “users should dictate the fine-grained policies regarding who may view their information.” The solutions would require individuals or companies to create data classification policies associated with their social media presence. However, the stakeholder support that these solutions require makes their implementation problematic. Additionally, they limit the data available to the social media provider in its effort to generate advertising revenue.
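
For readers curious what “fine-grained” might mean in practice, here is a minimal sketch of a user-dictated, per-field visibility policy of the kind the paper proposes; the field names and audience tiers are my own assumptions, not the paper’s.

# Minimal sketch: the profile owner dictates which audiences may see each
# field. Field names and audience tiers are illustrative assumptions.
PROFILE = {
    "name": "J. Doe",
    "employer": "Company X",
    "birth_date": "1975-04-12",
    "home_city": "Des Moines, IA",
}

POLICY = {
    "name": {"public", "coworkers", "friends"},
    "employer": {"coworkers", "friends"},
    "birth_date": {"friends"},
    "home_city": {"friends"},
}

def visible_profile(viewer_group):
    """Return only the fields the owner has opened to this audience."""
    return {field: value for field, value in PROFILE.items()
            if viewer_group in POLICY.get(field, set())}

print(visible_profile("public"))     # only the name is visible
print(visible_profile("coworkers"))  # name and employer
print(visible_profile("friends"))    # everything the owner chose to share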

The successful use of social media management by one of my clients points to a practical response to these challenges.  Embracing Facebook and Twitter as part of its marketing and sales campaigns, it recruited its employees to promote a positive brand image. Thus, employees are shown the value of the messages they share online. More importantly, the organization’s security awareness program focuses on the social interactions that intersect with logical and physical controls. These efforts have resulted in fewer negative findings during social engineering engagements.

The hunger for connectedness and trust is at the core of the challenges posed by social networks. Their secure use lies not with an artifice of policy templates and FUD. Rather, it relies on recognizing the value of your information to those who would abuse it.

Social engineering is a topic we plan to cover more on future Security Connected posts. For more details and regular updates on McAfee happenings and infosec news, join the conversation on Twitter by following us at @McAfeeBusiness.

By: Steven Fox