Thursday, July 28, 2011

Phishing Brazilian Brands

Symantec keeps track of the brands targeted by phishing and monitors trends in the countries in which each brand’s parent company is based. Over the past couple of months, phishing sites have been increasingly targeting Brazilian brands. In May and June, phishing sites targeting Brazilian brands made up about 5 percent of all phishing sites -- nearly three times the share seen in the previous month. The phishing Web pages were in Brazilian Portuguese, and the most targeted brand was a social networking site.
 
Below are some noteworthy statistics on the trend observed:
 
  • The majority of the phishing sites targeting Brazilian brands, approximately 58 percent, used IP domains (i.e., raw IP addresses in place of domain names, such as hxxp://255.255.255.255). 
  • Twelve Web-hosting sites were used to host 4 percent of the phishing sites targeting Brazilian brands.
  • Several banks were attacked, and the banking sector made up about 39 percent of the brands targeted. Phishing of the social networking sector primarily targeted a single brand and comprised 61 percent of the total. The remaining phishing sites (approximately 0.5 percent) spoofed an airline brand.
  • Approximately 64 percent of the phishing sites were created using automated phishing toolkits. The remaining 36 percent were unique URLs.
 
As the majority of the phishing attacks came from automated toolkits, we understand that phishers are trying to target more Internet users from Brazil. With these toolkits, phishers are able to create phishing sites in large numbers by randomizing URLs. Below are two randomized URLs used by the toolkits (a small detection sketch follows the examples):
 
  • hxxp://***.***.***.***/~namo/login011/?accounts/ServiceLogin?  [IP removed]
  • hxxp://***.***.***.***/~namo/login008/?accounts/ServiceLogin? [IP removed]
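
For readers who want to experiment, here is a minimal sketch (in Python) of how URLs like the two above could be flagged: the host portion is a bare IP address rather than a domain name, and the path follows the toolkit's randomized "loginNNN" pattern. The regex, helper name and sample URL below are purely illustrative -- this is not Symantec's detection logic.

    import ipaddress
    import re
    from urllib.parse import urlparse

    # Path pattern seen in the toolkit-generated URLs above, e.g. /~namo/login011/
    TOOLKIT_PATH = re.compile(r"^/~\w+/login\d{3}/$")

    def looks_like_toolkit_phish(url):
        parsed = urlparse(url.replace("hxxp", "http", 1))  # undo the defanged scheme
        try:
            ipaddress.ip_address(parsed.hostname or "")     # raw IP in place of a domain?
            host_is_ip = True
        except ValueError:
            host_is_ip = False
        return host_is_ip and bool(TOOLKIT_PATH.match(parsed.path))

    # 203.0.113.5 is a documentation address used as a stand-in for the removed IPs
    print(looks_like_toolkit_phish("hxxp://203.0.113.5/~namo/login008/?accounts/ServiceLogin?"))  # True
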
Figure 1: Domain names used in phishing sites of Brazilian brands
 
Internet users are advised to follow best practices to avoid phishing attacks:
  • Do not click on suspicious links in email messages;
  • Avoid providing any personal information when answering an email;
  • Never enter personal information in a pop-up page or screen;
  • Frequently update your security software, such as Norton Internet Security 2011, which protects you from online phishing.
By:  Mathew Maniyara

Wednesday, July 27, 2011

Operation Phish Phry - Revisited

Like the career of a one-hit-wonder pop star, it started with a bang and went out with a whisper. Almost two years ago, the big news was about Operation Phish Phry. In October 2009, the FBI announced that almost one hundred people (half here in the US, half in Egypt) had been arrested for running a phishing ring. At the end of June this year, news reports announced the sentencing of Kenneth Joseph Lucas, who was the key US figure in this crime story. Convicted of 49 counts of bank and wire fraud, Lucas was sentenced to 13 years in federal prison.

Lucas is not a hacker. He ran the money mules in the US who opened accounts for the hackers in Egypt to deposit their stolen money into. The Egyptian hackers stole logins and passwords from the customers of US banks and then transferred people’s money into the accounts the money mules had set up. The money mules withdrew the money, kept a piece for themselves, and passed the rest on to Lucas. Lucas kept his share, and passed the rest on to the Egyptians. The feds say the gang raked in more than one million dollars.

If you blinked, you would have missed the news of his conviction and sentencing. It wasn’t given a lot of coverage, and what coverage there was didn’t have a lot of detail. If Lucas had not also showcased his prowess in growing marijuana in a YouTube video, he might not have gotten even the couple of paragraphs of coverage he did get.

I’m glad justice was done, but I can’t help but be a little disappointed. My hope in 2009 was that Operation Phish Phry would send a message to those involved in cybercrime. So, the message is there—13 years! But, it wasn’t that loud. Front page news of the sentencing might have helped get that message out, but I’ll settle for what we got.

If anyone knows what happened to the other people that got arrested, especially the Egyptians, drop me a line or send me a tweet at @kphaley.

By: Kevin Haley

Tuesday, July 26, 2011

Android Threat Trend Shows That Criminals are Thinking Outside the Box

A quick online search would reveal a number of articles declaring any one of the last few years as being the “year of mobile malware.” Conversely, these searches also reveal claims that the same years are not going to be the year of mobile malware. These search results go back as far as the early part of the decade. The contradictory nature of these bold predictive headlines could be explained by the fact that the articles are typically written at the beginning of each year—and who knows what the year may hold at the outset?

But if the criteria for qualifying 2011 as the real "year of mobile malware" were to be challenged, then surely the events of the past few weeks alone should be enough to show that this year truly has seen considerable seismic activity shifting the tectonic plates of the mobile threat landscape.
 
 
Figure 1 - Mobile malware, 2011: types and targets
 
The message that is coming through loud and clear is that the creators of these threats are getting more strategic and bolder in their efforts. We are seeing increased attempts to complicate the infection vectors of mobile malware to the point where a simple uninstall is insufficient.
 
Multiple payloads
 
One such strategy is to separate the malicious package into staged payloads. The idea is simple: instead of having one payload that carries all of the malicious code for any given attack, break the threat into separate modules that can be delivered independently. There are several advantages to deploying the threat in this way. First, it obviates the tell-tale sign of a huge, overzealous permissions list accompanying the installation of the threat, which may alarm the user as to the intention of the malicious app. Secondly, smaller pieces are easier to hide and inject into other apps. Furthermore, dispersing the attack across separate apps complicates the integrated revocation processes from the service provider, marketplace, etc. 
 
Figure 2 - Dispersed payload process of mobile threat
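
As a rough illustration of why staging matters, consider the permission lists involved. Each stage on its own can ask for a short, innocuous-looking set of permissions, while the union across all stages is the alarming list a single monolithic payload would have had to declare up front. The module split and permission names in this Python sketch are hypothetical:

    # Hypothetical permission sets for three separately delivered modules
    stage1 = {"INTERNET", "READ_PHONE_STATE"}           # reconnaissance module
    stage2 = {"INTERNET", "SEND_SMS", "RECEIVE_SMS"}    # premium-SMS module
    stage3 = {"INTERNET", "INSTALL_PACKAGES"}           # downloader module

    monolithic = stage1 | stage2 | stage3
    print(sorted(monolithic))                             # the combined list a single payload would show the user
    print(max(len(s) for s in (stage1, stage2, stage3)))  # the most any one stage asks for: 3
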
 
A textbook example of this is the newly discovered variant of Android.Lightdd. Apart from a few minor variations, such as the service name running in the background now being called “Game Services” and the three new domain names that it attempts to connect to, everything else remains the same as the previous samples discovered last month. This includes the encryption routine and the keys used to hide data within the threat.
 
Figure 3 - Data-gathering process of Android.Lightdd
 
This threat is the first stage in a multi-payload delivery system, responsible for reconnaissance and intelligence gathering (model, language, country, IMEI, IMSI, OS version) on the compromised device, which precedes the downloading of additional payloads. 
 
 "Game Services" running in Android.Lightdd
Figure 4 -  "Game Services" running in Android.Lightdd
 
An interesting fact is that the threat was capable of downloading additional components and updates through official channels of distribution as well as Internet/direct downloads. At the time of writing, all of the hosts associated with the threat are offline.
 
 
Figure 5 - Example of additional components and updates through official channels of distribution
 
 
Overcoming the user acceptance hurdle
 
As with its previous variant, Android.Lightdd still requires the user to accept the installation of any download—a major obstacle in this model of delivering a payload. However, another threat also discovered in the wild, Android.Jsmshider, has found a way to overcome this obstacle.
 
By signing the payload with an Android Open Source Project (AOSP) certificate, the threat was capable of performing further downloads without any interactions or prompts, as the underlying device considered the payload to be a system update by virtue of the accompanying certificate. At this point, however, this deception only works for custom modifications.
 
Figure 6 - Example of Android.Jsmshider exploiting Android Open Source Project certificate
 
Given the relatively elaborate installation of this threat, you would think that the final payload being deployed would rival something akin to the Stuxnet worm, but in fact, the final payload in the majority of cases was nothing more than a garden-variety premium SMS sender. Premium SMS senders and/or dialers don’t get a lot of respect from antivirus researchers, mainly because they lack sophistication and, just like those emails that we all get from a distant contact promising us a cut in the deal of a lifetime, depend largely on social engineering for a payoff. But, they have been around for ages and, as far as mobile threats go, have the quickest ROI for their authors. 
 
There is plenty of research demonstrating that the average price of a stolen credit card (due to competition and market forces, etc.) has dropped to as low as $0.40 – $0.80 (USD) per unit. In contrast, the latest dialer to be discovered that was targeting North America would pay the author $9.99 per successful install and execution. Furthermore, if the threat is not detected by the user, each subsequent execution would result in a continuous revenue stream—until the owner of the device saw his or her next phone bill, that is.
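
To put those numbers side by side, here's a quick back-of-the-envelope calculation; only the per-unit prices come from the figures above, while the install counts are invented:

    card_low, card_high = 0.40, 0.80        # USD per stolen credit card
    payout_per_run = 9.99                   # USD per successful install and execution

    # One successful dialer run is worth roughly 12 to 25 stolen cards
    print(payout_per_run / card_high, payout_per_run / card_low)

    installs, runs_per_install = 1000, 3    # hypothetical, undetected campaign
    print(installs * runs_per_install * payout_per_run)   # roughly 30,000 USD
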
 
Another interesting trend that Symantec has observed is the use of in-app features that facilitate the promotion and/or download of other apps. In some cases, we have seen this implemented as full-fledged browsing access to another third-party app store that has been embedded as undocumented functionality of the original app that the user has downloaded from the official marketplace—without any indication that the victim is downloading or browsing apps from another website or store.
 
Figure 7 - In-app features that facilitate the promotion and/or download of other apps
 
Even though user interaction is required to install additional apps, there is a concern that this vector has an element of social engineering, whereby the user assumes that, since the first app was downloaded through an official channel, any additional apps must also originate from there. Since there is no indication to the user that he or she is downloading from a third-party site, an element of trust might be established with this particular vector.
 
All things considered, the real question that comes to mind is: if this truly is the “year of mobile malware,” where do we go from here?
 
By: Irfan Asrar

Monday, July 25, 2011

Like a few people I know, I slavishly follow the economic news from around the world. 
Sure, I have a casual background in economics, and I've always thought of economics as the engine that powers so much of human activity.  That, and demand for EMC's products and services is somewhat correlated with economic swings :) 

My current fascination is the US economy -- not only is it the largest by some measures, but -- historically -- it tends to often go through key transitions before other regional economies.  It's not a bad "early barometer" to watch strong forces play out globally -- if you're watching carefully.

Without delving into politics, I think the current US economy can be best described as "uneven" or even "lumpy".  At an aggregate level, there's the perception that there's not enough growth or job creation to get things moving in the right direction.

Indeed, the current round of spectacular policy debates seems to arise from fundamental disagreements regarding what to do about the situation vs. any disagreement that there's a serious challenge at hand.
For me, I see the US as a tale of two very different economic models being measured as a whole: the tail end of a successful-but-getting-tired legacy model, and the early days of its powerful successor. 
Focus on the legacy model, and I see a picture of a running-out-of-steam economic model with few attractive options left for substantial rejuvenation. 

But focus on the newer model, and it's easy to be dazzled by the potential for growth and prosperity.

Thinking About Economic Growth
Without delving too deeply into current economic thinking, one popular thought stands out in this context: when you care about creating growth and jobs, it all boils down to worker productivity -- how many people are working, and -- more importantly -- how much economic value is created by each employee or contractor.

Economies with high employment and high worker productivity tend to historically outperform those with relatively lower productivity.

The global economic pecking order often gets redefined when one player finds a way to dramatically increase their per-worker productivity, e.g. China.  Correspondingly, stagnation in worker productivity often correlates with a stagnant economy -- add your own examples here.

Classifying Economic Activity By Underlying Model 
You'll have to look hard to find others using this particular descriptive model, but -- for me -- it does a decent job of describing what I've been observing for a while now.  In my somewhat unusual world view, the activities of the private sector are the fundamental engine of the economy -- it generates the tax revenue that pays for government investments.

I believe that when the private sector does well, so does the public sector -- indeed, we all collectively tend to do well when the private sector is expanding.

As I frequently interact with many companies across the spectrum, I'll offer up my own two-part model that's helping me understand what I'm seeing.

First, there's the "classical" economy: goods and services that are built on a historically labor-and-process intensive model.  There's nothing unique here, it describes just about every traditional "physical world" business you and I are familiar with: manufacturing, transportation, services, etc. 
In this context, IT is mostly used to "automate" what was once essentially a physical set of activities.  No matter how whiz-bang the technology (or how much is spent on it!), you'll still see the structural bones of the legacy business model underneath.

In stark contrast, there's the "information" economy: goods and/or services that are designed to be information-and-expertise intensive.

When you come across it, the entire business model has been designed (or re-designed!) around some very smart people with access to enormous and powerful information resources.  You can't easily find the antecedents of a traditional labor-and-process business model.

To be clear, this is not entirely about the latest round of high-profile social sites with spectacular valuations (although they're not a bad example); for me, it is *any* business model that's been redesigned from the ground up to depend heavily on massive amounts of information complemented by very focused expertise.

Classification Isn't Always Obvious 
If you're intrigued by this particular two-part classification approach, you can't arbitrarily assume that one vertical or another belongs in one category or another. 

Take something as important yet familiar as "health insurance" in the US.  You'll find many players firmly entrenched in the classical model, a few new and very intriguing players in the new model, and all sorts of players trying to transition from old to new. 

Ditto with finance, manufacturing, transportation, energy, education, etc. etc.
When I meet with a customer who's clearly in the first category, there are a few things I can guess about their business without even asking. 

For starters, they're probably not hiring in a big way, or -- more concerning -- might be going from layoff to layoff.  You go to their website and look at what's open, and there's the odd hard-to-fill position here or there, but it looks strictly tactical, not strategic.  Certainly, you don't get the feeling that they're on an economic growth agenda.

From an IT perspective (my primary interaction with these companies), IT is generally seen as an expense, not an investment.   You talk to the IT leadership team, and it feels like they're frequently getting tired and worn down -- major challenges, no budget (or reduced budget), and so on. 
Although I'm as much a fan of IT cost savings as the next person, you can't save your way to prosperity, folks. 

When I meet with a customer who's in the second category (or moving in that direction), you get an immediate sense that they're seeing either dramatic growth, or the potential of seeing that growth very soon.

You go to their website, and they're looking for some pretty smart and ambitious people -- a lot of them.  They tend to value culture and attitude vs. specifics on the resume or CV.  You talk to their IT team, and -- yeah -- they're busy, but it's a *good* busy.  They've signed up for the mission of supporting an information-centric business model, and all that it entails.
For them, IT is clearly seen as a fundamental investment that's expected to pay off in huge ways, and not just another line item expense to be cut when times get tough.

When New Is Embedded In Old 
Yes, I meet plenty of IT organizations whose companies are either firmly in the classical model, or firmly in the information-centric one.  Once you figure out where they're coming from, the discussion is familiar -- it's either 95% old school, or 95% new school.

More frequent -- and more challenging -- is when you see the bright elements of a new model embedded in the old one.  Most of the organization (and investment) is slavishly supporting the legacy, and -- somewhere in all the IT projects -- there's a few very cool ones trying to get out -- ones that have the potential of transforming the business in a meaningful way.

For example, a strategic shift from B2B to B2C.  Or perhaps a move to delivering expertise via online applications and interactions vs. phone, email and customer visits.  Maybe recognition that there are a handful of roles in the company that really generate most of the value, and rallying the IT investment behind them.

Occasionally, a realization that big data analytics can do more for the business than simply answer rote queries faster :) 

There's more, but -- during the conversations -- you'll find one or more gems in the IT agenda that are clearly emblematic of a new economic model, and a significant departure from the old.

What Do You Tell These People? 
When I see this situation -- and it's frequent -- I used to be at a loss as to what to do in front of the customer, right there and then.  Do I launch into an extended discussion around my personal macroeconomic theories and information-based growth levers, et al.? 

Yeah, right. 
Instead, I offer up a quick (and often palatable) talk track around "investing for growth" vs. "investing in improving current operations".  The business has to make decisions along those lines; I argue that IT (as part of the business) has some decent latitude about how it sees its own investment options.
A relevant example is the shift towards cloud and IT-as-a-service.  Is it all about saving money?  Or perhaps accelerating new initiatives in the business? 

One transformation, two distinct outcomes. 

Back To Economic Growth 
I believe that any realistic formula for economic growth involves increasing per-capita productivity.
When you look at per-worker productivity in any information-and-expertise intensive business, the numbers appear to be in an entirely different league than those from a previous era.  Indeed, many developing countries are now starting to think in terms of investment in "information infrastructure" in the same way they'd think about transportation, communication and other infrastructure investments.
Developed economies, perhaps less so -- at least at a public policy level.

If you agree with me that information and expertise will underpin the majority of future successful economic models, the potential role of an IT professional can be an interesting one indeed. 
The recipe for future thriving business models may now include smart people who understand the newer forms of technology, and how those can be used to create entirely new businesses that aren't simply a reprise of familiar approaches.

I'm just waiting to see the first industry IT thinker describe themselves as "creator of economic wealth". 

It's not as outrageous as it sounds.

By: Chuck Hollis

Friday, July 22, 2011

Organizing From Silos To Services

I"ll let you in on a secret I've known for a while.

When I speak to IT leadership, the #1 topic that they're interested in -- by far -- is the organizational changes that result from moving to an IT-as-a-service model, e.g. "cloud".

They understand that they need to move in that direction.  They understand it's a journey, and not an event.

But -- at the end of the day -- IT organizations are comprised of people and -- if you've ever led an organization -- it all comes down to the people: what skills, what roles and what structure.
And meaningful organizational change is a daunting task for any leader.

I've written before about the new skills and new roles in next-gen IT organizations, and I've given you an update on our internal progress (EMC IT) as we move our non-inconsequential IT function in that direction.

Today, I'd like to give you a progress report on the structure, role and measurements of the recently formed Private Cloud Infrastructure Group within EMC IT.   To me, it looks like an organizational pattern we'll be seeing much more frequently before too long.

If you're looking at this from somewhere in your IT organization, and thinking "gee, this doesn't apply to me", I'd encourage you to do yourself a favor and perhaps share it with someone a bit higher up in the organization.

My guess is that they might find it interesting :)

From IT Silos To IT Services
There are many ways to compare and contrast traditional technology-and-project-centric IT organizations and newer IT-as-a-service ones.  For me, the best way to describe the transition is "from silos to services".

Just to be absolutely clear, by "silos" I'm referring to a predominance of specialized technology groups, e.g. Windows team, Linux team, VMware team, SAN team, NAS team, backup team, BC/DR team, security team, networking team, etc. etc. with extremely weak "connective tissue" between the disciplines.

EMC's VP of IT Infrastructure and Services -- Jon Peirce (great guy!) -- has a very illustrative slide that looks at a strikingly similar before-and-after transformation that happened in the manufacturing industry, which -- interestingly enough -- was exactly where he started his career.

Consider the picture on the left of old-school manufacturing.

Lots of excess materials stacked everywhere.  People doing individualized and highly-specialized roles.   Not a whole lot of thought given to automation, process reengineering and the like.

Don't smirk too much -- our current IT environments aren't all that different.  For a profession that's  supposed to be proficient at technology, we often use it in very inefficient ways.
Now consider the picture on the right of modern manufacturing.

Automation as the default.  
No people -- anywhere -- unless there's a problem.  
No waste.  

Completely optimized and matured processes.  
Things are measured and monitored vs. "managed" in a traditional sense.

Take a close look, please.  I think it's a decent proxy for the before-and-after that IT is going through.

If manufacturing -- or telecommunications or logistics or energy distribution or any other darn industry -- can make this sort of seismic transition, then certainly the lumbering and balkanized IT industry can do the same.

At least I hope so.

Key Roles -- Before and After
I swiped this slide from KK, our lead architect within EMC IT, and -- if we had an IT CTO -- well, he'd be it.

I thought it did a good job of capturing some of the key functional transitions that were at play here.
For starters, consider the "design and architecture" role.

Historically, this has been a project-oriented role.
Each new application or project got its own design and/or its own architecture.  Maybe there was re-use of similar component technologies, and maybe some of the design patterns were roughly the same -- but the key point is that they did their job assuming that each application environment was designed to be implemented and run as separate entities, and not based on shared services.

This is in sharp contrast to the new version of the role, where the goal is to design and architect a single multi-tenant environment that can be shared by as many applications (projects?) as possible.
There's still a need for design and architecture skills, except they're building a small number of big things to be shared vs. a large number of smaller things that aren't designed to be shared.

Next, consider the "build and operate" roles.
Historically, the "build" role has been acquiring, assembling and configuring the required components, and provisioning them to be used by other parts of the organization.

The "operate" role has been mostly monitoring, with a healthy dose of break-fix when something isn't working.

Keep in mind, this expertise is usually spread across a very wide landscape of different application/infrastructure combinations (one per app!) making repeatability and automation difficult.
In the new world, the roles are still important.

"Build" is more like "provisioning of services when needed" from the shared pool vs. physical assembly.  "Operate" has shifted to monitoring the processes vs. monitoring the individual components of the environments.

Perhaps the most significant change -- at least to me -- is in the front-end of the process, termed "product and service management" function here.

Historically, these have been the people who (a) take new requests for resources and services and find out what needs to be done, and (b) generally take the lead when things break and need some deeper investigation.

In this new model, they're more like mini-entrepreneurs: they "own" their service: the definition of the service and its composition, consumption costs associated with the service, publicizing and promoting the service (whether inside of IT or outside), monitoring service delivery levels, and -- ultimately -- figuring out which services need to be retired (due to lack of demand) or new services are needed.

As Adam Wagner (one of the people in EMC IT who works in this group and is living the dream, so to speak) explains the role of the new services manager:  "It's just like a retailer with ten things on the shelf.  If five things sell and five don't, you go get more of the five that sell, and figure out how to replace the five that aren't selling with five that do".

The New Organizational Model

Take all of this, and bake it into an organizational model, and you get something that usually looks like a three-part stack.
The services group is "known" by its interface to the outside world: a published list of services, with service managers behind each and every one of them.

That services group is then supported by a platforms group that is responsible for designing, building and operating the shared platform behind those services.

Behind that, there's a foundational technologies group with the required deep expertise in particular disciplines as needed: servers, virtualization, storage, networking, security technology, et al.

Although this specific example is for our Private Cloud Infrastructure Group (or PCIG for short), the same design pattern is being applied to other IT functions, e.g. applications, user experience, data services, etc.

The same three-part model is familiar in each functional instantiation: exposed services from that group, a "platform" that might incorporate services published and managed by other IT groups (e.g. infrastructure), and whatever foundational technologies are unique to that functional area (e.g. middleware).

It's an important point, so I want to be clear -- the vast majority of published and consumed services using this model are entirely consumed by other internal EMC IT groups vs. directly consumed by non-IT users.  Sure, there are services that are directly consumed by users (e.g. the Cloud 9 self-service infrastructure), but that's not the goal of each and every service.

Key Interactions To Note
The "value chain" if you will, is driven by the services manager(s).

He or she has an eye on how the services are being delivered and consumed, costs associated, shifts in demand away from existing services and towards new services -- like any "owner" would think about things.

The "supplier" is the platforms group.  The service manager is constantly pushing the platform group to do more, do it faster, do it better, do new things.  The platform group, in turn, is motivated to cost-reduce service delivery from the platform, standardize and automate things as much as possible, and so on.  Put differently, the platform group "sells" their capability to the services manager.
The platform group, in turn, relies heavily on the foundation technologies group to be out there looking for cool new technologies that help the platforms group do their job better: newer hardware, newer software capabilities, etc.

The same sort of end-to-end value chain is invoked when there's a problem or issue.  Service manager says "we've got a problem", platform manager investigates, and calls in the foundational technology specialists if needed.

More importantly, we're seeing a wonderful flow with "requirements" for the shared services coming down from above (and percolating into the other layers) as well as a steady flow of innovations and enhancements generally coming from the other direction.

All of the interfaces, roles and responsibilities are relatively clear -- at least at a high level.
Just like any supply-chain delivery model :)

End-To-End Supporting Functions
If the "core" of the IT service delivery model is multiple instantiations of this three-part model (services, platform, foundational tech), it's worthwhile to point to a few disciplines that are clearly *outside* of this framework, and serve to support the whole vs. pieces.  IT Finance and HR are obvious examples, as are the Global Operations Center and, of course, the Help Desk.

Perhaps the most important (and interesting) new component of the services-oriented stack is the new Solutions Desk.  Think of it as a front desk for all the other front desks.

The Importance Of The Solutions Desk "Clearinghouse"
Imagine I'm a business user, and I'd like 300GB of capacity for some reason.  I can get in contact with a Solutions Consultant, and share my request.

The answer that's likely to come back is that if I'm willing to accept 250GB at moderate performance and once-a-day backup with four-hour restore, there's a standard service that I can click on the portal, no questions asked.  Immediate and instant gratification.

However, if I'm insistent that I really *DO* need 300GB, and 24 hour RPO / 4 hour RTO isn't good enough, and performance matters, there's a slightly different process for approval and provisioning that's measured in a few days/weeks vs. a few minutes.

Of course, the special "service" is still carved from the same shared platform, using the same processes, etc.  It's just not offered as a standard service for easy consumption.

You'd be surprised how many people would take the 250 GB "standard" option to get what they need right now vs. later.
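
If it helps to see that back-and-forth as logic, here's a small Python sketch of the triage. The catalog entry, names and thresholds are made up for illustration -- they're not our actual service definitions:

    from dataclasses import dataclass

    @dataclass
    class StorageService:
        name: str
        capacity_gb: int
        rpo_hours: int   # how stale a restore may be (backup frequency)
        rto_hours: int   # how long a restore may take

    CATALOG = [
        StorageService("standard-capacity", 250, 24, 4),   # the click-on-the-portal offering
    ]

    def triage(request_gb, acceptable_rpo_hours, acceptable_rto_hours):
        for svc in CATALOG:
            if (request_gb <= svc.capacity_gb
                    and acceptable_rpo_hours >= svc.rpo_hours
                    and acceptable_rto_hours >= svc.rto_hours):
                return "standard service: %s (self-service, minutes)" % svc.name
        return "special request: solutions consultant, approval measured in days/weeks"

    print(triage(250, 24, 4))   # fits the standard offering
    print(triage(300, 24, 4))   # the 300GB insistence goes down the exception path
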

That back-and-forth interaction turns out to be really important.

First, the consumer of this service (presumably me in this example) often doesn't know what the options are -- someone needs to explain them to me -- at least, the first time around.

Second, the service manager who's defined and managing the service is highly motivated to have as much as possible come through his or her "standard" service.

If too many "specials" come through that look similar, that's a strong indication that maybe a new service needs to be created.  The notion of a retailer with services on the shelf is reasonably accurate here.

Going Farther
The concept is turning out to be very extensible in practice.  The individuals who staff it really aren't architects in the classical sense; they're more consultants.

They know what's in the service catalog, and they know what's involved (cost and efforts) in creating new services.  Like any good consultant, they're highly motivated to sell what's on the truck, and do as little customization as possible.

And, of course, for the very big or the very unusual, the process shifts back to a more traditional requirements and planning approach, but -- still -- there's a high proportion of the standard services offerings that comprise the eventual "solution".

An interesting use case arises around remote locations.  Given that we're EMC and we operate our various business functions around the globe, this comes up frequently.

The services team has come up with two broad flavors of offerings.  It turns out that in many situations, not much IT footprint is needed locally.  Between VDI and WAN acceleration, a "dumb" footprint in the location is becoming more frequent.  Even if there's a server footprint required, there's a standard set of service choices to back it up, monitor it, secure it, etc.

When some lucky EMC employee lands in a new location to set up shop on behalf of the company, they talk to a service consultant who knows the standard remote office offerings, their pros and cons, costs, and makes it very easy indeed for the local requester to simply get things done and move on to the real job at hand.

The same line of thinking has been extended to providing a limited set of standard user experiences (we don't use the term desktop images anymore), whether that be on a classical laptop, or -- more often -- on a mobile device of one kind or another.

For example, if you've requested "iPhone support", there's a standard set of services you're going to want: email, web access to internal applications, etc.  Make it a packaged "service", and everyone wins.

Common Questions
I've now been in more than a few situations where we've put this line of thinking in front of a senior IT team, and there are some common questions that come out.  I thought I'd share the more common questions, and the answers.

Where did you start?  Bottom up, or top down?
Well, if you start top down, you can't make much progress.  After all, every request of the IT organization is inherently different and unique.  Conversely, if you start from the bottom up, you're simply documenting what you already do at a very granular level.

The answer ended up being "middle out" -- create logical groupings of services (e.g. infrastructure) and start there.  Also, for very logical reasons, we tackled the infrastructure function first, hence the title of the group "Private Cloud Infrastructure Group".

We intend to apply the same design pattern to other service-delivery parts of IT.

Do you do chargeback?  How was this funded?
Every group understands their cost-to-serve for the service to about the 80-90% level.  Sure, there are some allocated costs in everything, and it's not as precise as we'd like it to be.

However, there's enough awareness of costs to have an intelligent discussion with someone who feels they need zero-data-loss availability vs. daily backups.  There is a chargeback model in some areas, but not as many as we'd like.  The belief is that understanding and exposing true costs (e.g. "showback") is a necessary first step towards chargeback models.
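
For what it's worth, the mechanics of showback are simple enough to sketch; the unit costs and consumption figures below are hypothetical, not our actual numbers:

    # Estimated cost-to-serve per unit (hypothetical)
    unit_costs = {"storage_gb_month": 0.25, "vm_hours": 0.08}

    usage_by_group = {
        "sales-apps":  {"storage_gb_month": 4000, "vm_hours": 12000},
        "engineering": {"storage_gb_month": 9000, "vm_hours": 30000},
    }

    for group, usage in usage_by_group.items():
        total = sum(qty * unit_costs[metric] for metric, qty in usage.items())
        print("%s: $%.2f this month (showback only -- no invoice)" % (group, total))
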

The hard part was breaking away from the per-project-funded-by-the-business model that defines so much of IT activity.  We basically had to justify the upfront investment in the creation of shared, pooled services -- and the people and processes to deliver them -- on the promise that they'd ultimately be more cost-effective (not to mention more agile) for the business.

We got that result.

How did you decide on the first round of services?
It was an educated approximation.  Some of the services turned out to be very popular, others weren't.  A lot of requests ended up being pretty close to the standard services, which allowed us to tune up the offerings a bit.

Again, it's that continual feedback loop between producer and consumer that results in services that people want.

I'm interested in your self-service environment, which you call Cloud 9.  How did you set pricing, and how do you govern its consumption?
Anyone can get a decent amount of resources on Cloud 9, but only for 90 days -- no exceptions.  That 90 day limit turns out to be very effective in attracting the "right" use cases, and discouraging the "wrong" ones.  If an application or use case needs to be around more than 90 days at the outset, it's probably a different discussion.

It should be pointed out that self-service Cloud 9 resources are made available from the same pool of resources, services and processes that we use for more demanding parts of our business.  It's nothing more than a different consumption model on top of a standard capability.

After much thought, we decided to use Amazon's pricing as a proxy for pricing our internal services.  We thought it better to perhaps slightly subsidize initial consumption so that we could get visibility into what people were doing, provide some basic protection and security, and -- most importantly -- make it very easy to move surviving applications into something more appropriate at the end of 90 days, if it was needed.

How many of your new projects are being consumed off of the shared services catalog, vs. project-specific infrastructure and processes?
A significant majority of our new "projects" (we really don't think in terms of projects in the traditional way anymore, but our clients do) consume the majority of their needs off of either a standard service or a slightly modified variation.  We're also doing a lot of work to package elemental services together into easy-to-consume "bundles" -- e.g. compute, storage, data protection, etc. -- that are roughly scaled together.

Any optimization we might get by individually fine tuning the components is more than outweighed by the ease of provisioning and consumption.  Keep in mind, many of the physical resources are actually virtualized ones: server, thin provisioned disk, etc.

The role of the service managers is to make sure we have a minimum number of the "right" services to cover the majority of the incoming requirements without too much customization or modification.
It's a learning process, but it's going pretty quickly.

How do you keep people from over-specifying or under-specifying their requirements?
That's where the role of the solutions consultant comes in.  If it can't be resolved at that level, there's the usual escalation process between business leaders, just as you'd see with any organization requesting services from another.

The majority of the time, though, things can be resolved at an operational level.  It's mostly a conversation around real requirements and tradeoffs.

Is This New?
No, not if you look outside of traditional IT groups.  You'll see many of these same service-oriented IT organizational patterns in the newer IT service providers we're working with.
After all, if you're an IT service provider -- and that's your business -- this is precisely the sort of structure you'd need to effectively deliver IT services that people want.
Or, put differently, many IT organizations are becoming internal service providers -- so they'll have to organize like them, won't they?
Food for thought.

By: Chuck Hollis

Thursday, July 21, 2011

What's In A Name?

Names do matter.  

Especially in organizational settings, what you decide to call your aligned group can have a subtle yet powerful effect on perceptions: both within the organization, as well as how people outside the group perceive your role and intended value proposition.

And if you don't think perceptions matter that much in organizational settings, well ... you're wrong.

How Did This Come Up?
It was a customer briefing, of course, and the CIO had brought his senior leadership team.
The gory details don't matter, but the big picture does: the new CIO had come from elsewhere in the business to lead the IT team through a transformational journey.
How things had come to be was less relevant than that they needed to change, and fast.
During the opening part when the CIO was laying out his game plan, I made the mistake of referring to his function as "IT".

"No", he corrected me, "we've renamed the group IM -- information management".
It took about 10 seconds for that to sink in, and then it hit me: brilliant!
In one fell swoop, he announced to his team -- and the broader organization -- that their real job was maximizing the value of their corporate information portfolio.  The technology used to do this was simply a means to an end, and not an end unto itself.

He also announced to the broader external organization that his team's role and aspirations had significantly changed.

And -- most importantly -- he sent a clear message to the leadership of the entire organization that -- yes -- information matters.

In two words.

Why This Was Important
The existing IT organization had been largely constructed around acquiring and deploying technology assets.  "Tin and links" as one person put it.  Their situation is not entirely unique, by the way :)
As a result, the EMC sales team was focusing on giving them exactly what they were asking for: better tin (e.g. storage) and better ways to use the links.  Tactical, not strategic.  Not ideal, but -- as a vendor -- perfectly understandable: we're programmed to be responsive to customer requirements.

You want to talk about storage, or security, or virtualization, or whatever -- that's what we talk about.
As the conversation progressed, it was pretty clear to me that the new CIO had thought through his game plan at a conceptual level: for example, deliver services, not technologies.  Or being smart about what's done internally, and what can be done externally better, faster or cheaper.  Or investing heavily in the bits that matter to the business, and do "good enough" everywhere else.

Given that their business value proposition centers around a cadre of highly-skilled mobile knowledge workers, it was clear (at least to me) where they'd want to "go long" on IT investment.  So I asked a few questions, and got the answer I was looking for.

A new IT (er, "IM", sorry) team was being formed to look at new forms of collaboration and non-linear workflows.  Another team was looking at knowledge management and re-using intellectual assets.  A new team to think about security and GRC from an information perspective.

They'd found a promising rock star to lead the creation of the service catalog they aspired to deliver, and the processes needed to make it better over time.

And so on.  All good.

They Were Surprised By Us
Let's face it -- when you say "EMC" most people immediately think "yeah, those storage guys".  Fair enough -- after all, we've been #1 in the storage market for quite some time.  And, as market brand positions go, I'll take it.  Much better than being #2 or worse :)

When EMC people present, they usually make a big deal that "we're much more than a storage company now", pointing at all the non-storage technology assets: virtualization, security, information management, big data analytics, etc. etc.

Still a technology vendor, though :)  Just more stuff to talk about ...
Like so many IT functions, technology was quickly becoming a means to an end: they needed to transform from classical IT to IT-as-a-service.  They needed to change the way they did things.
Technology alone won't do that for you.

Who would be their vendor-partner for that journey?  
That was the positioning I sought with them.  Please think of EMC as a vendor-partner who can help you on your journey.  We've done it ourselves, and we're currently helping thousands of organizations like yours figure out where they want to go, and help get them there faster, with better outcomes and less risk.

We spent an entire two hours on IT transformation: how to go from silos to services.  What worked, what didn't, where we saw the problems, how people came up with clever solutions. I only mentioned one bit of technology along the way (Vblock, of course) as an enabler rather than an end-goal.
I don't think they were really expecting that sort of discussion with us.  Part of it was their brand association with the name "EMC".  I guess we have our own perception-changing "opportunities" ahead of us :)

More Names
As I look across newer forms of IT organizations, the names are changing.  Private cloud infrastructure group.  Services management group.  Foundational technologies group.  User enablement.  Global operations.  The old names and traditional silos are getting less popular with every passing day.

And I think that -- in this specific context of organizational change -- names can be powerful things indeed.

 By: Chuck Hollis

Wednesday, July 20, 2011

When Was The Last Time You Updated Your Storage Strategy?

Customer and partner discussions seem to move in seasons.  I'm sure there's some underlying cause-and-effect at work here, but sometimes it's a complete mystery to me.

It seems that -- almost out of nowhere -- there's a notable uptick in requests to have non-trivial storage strategy presentations and workshops.

Not to oversimplify, but I think that IT planners are starting to realize that the whole topic of storage is going through a number of rather important transitions.

On the demand side, the evidence is clear: the majority of the organizations we support are seeing unprecedented demand to acquire, store and manage multiple rivers of information.
Put differently: here comes the flood.

On the supply side, there are significant and meaningful changes in how storage is built, how it's operated and how it's consumed.

Sure, there's lots of opportunities to do small things here and there to move along in the right direction: introduce a new technology here, fix a process there -- incrementalism vs. re-engineering.

But, at the same time, I'm finding more and more cases where it's probably time for our customers and partners to sit down and re-envision end-to-end how best to cope with the new world of information-intensive businesses.

Disclaimer(s)
First, keep in mind that I spend a lot of time with bigger (and more complex) environments.  A lot of what I'm about to talk about here isn't applicable if your environment is more modest.

Second, there's a bit of negotiation that goes on in these sessions.  There's always a list of topics that the audience is interested in as a starting point.  But I also think there's a list of topics that the audience *should* be interested in, even if they say they're not :)

I also have to point out that this is how *I* personally tackle the topic as an individual here at EMC.  Although you'll find a preponderance of EMC thinking here, I wouldn't consider it the final and official word on this topic :)

Finally, this is not happy-face marketing.  These are serious discussions, often with not entirely pleasant implications for all involved.  Better to get that out earlier rather than later.

A Quick Diagnostic
Not every IT organization is in this sort of pain.  But there are quite a few where you can see it visibly hurts, and will only get worse unless a holistic approach is used.
Ask yourself these questions ...

What's going on in the business?  Has there recently been a rapid surge in the amount of information that needs to be stored or retained?  New businesses?  New, large applications?  New regulations that mean keeping more stuff around longer?

How is the storage team doing?  Happy, productive IT professionals?  Or perhaps they're getting frustrated, and there's some turnover?

How are the people who interact with the storage team doing?  Server team, virtualization team, app team, even exec IT management -- are communications and processes reasonably productive, or is there frustration in the air?

How's your story?  Can you communicate your approach to storage using a handful of images?  Services delivered, processes used, key technological elements?  Or is it more like a bunch of stuff with a lot of ad-hoc processes around it?

If you aren't familiar with the frog-in-boiling-water analogy, it's useful here.  Urban legend says that if you throw a live frog into hot water, it will immediately detect the situation and take corrective action.  However, if you put the frog in cool water -- and slowly heat it -- the frog will never detect its peril, and eventually get cooked.

When there are highly visible crises of one sort or another, people can easily be galvanized into taking meaningful action.  When it's a death of a thousand cuts, it's a lot more difficult to get the party started.
So, Where Do I Get Started In The Discussion?

A while back, I did a "mind map" of all the relevant topics that one might have in a "storage strategy" discussion -- diving down a bit, but not at an overly technical level.
The list of high-level topics alone went to six pages :(

Obviously, there's a lot that could potentially be covered, but how do you organize the material so that you can have a meaningful dialog in, say, 60 or 90 minutes?

Keep in mind, everyone is coming from a different starting point.  And, unfortunately, this stuff is way too familiar for me, so I can occasionally breeze over important points that I naively assume that everyone knows about.

So here's how I'm currently organizing the discussion:

Context setting, both in the industry and in IT.  
Examples of industry context topics: a quick chat about expected information growth, shift in information types, new constraints on acquiring and using information, and so forth.  Most technologists don't like to spend much time on these topics, but -- again -- a bit of big-picture is something I feel most technology companies don't spend enough time on.

Examples of IT context: pervasive virtualization, advent of ITaaS models (think "cloud"), new demands for speed and agility by the business, and so on.  I want to make it clear that we see storage as only part of an end-to-end IT service delivery capability, and not an island :)

Key technological shifts in storage.
I don't want to argue the precise timing and status of these shifts, only that they're happening, and they're having an impact on how storage stuff gets built today and in the future.

As obvious examples, consider the shifts from tape to disk, and from disk to flash.  Or the widespread use of industry-standard components vs. more proprietary hardware.  The inherent appeal of scale-out storage architectures as information growth continues to outrace Moore's Law.  Converged storage networks.  Or storage value and innovation increasingly being expressed as software vs. hardware.
Nothing too controversial here -- all the usual stuff that most of us storage junkies sort of accept as the articles of faith.

I do throw in a few curve balls to make people think, though.  One area I spend some time on is the importance of metadata in managing information, and how different forms of storage (block, file, object) can leverage metadata to differing degrees.

That's usually worth a bit of head-scratching, especially in content-rich environments.
The other area I spend a few moments on is "information logistics", basically getting the right information to the right place at the right time over nontrivial distances.  As IT delivery models get more global, this is becoming far more interesting to more people.

Changing IT Management Models
If the technology is going to radically change, how we're organized to use it is going to have to change as well.  One of my personal rants is that we -- as producers and consumers of IT technologies -- don't spend enough time around how we organize to consume the technology (and deliver services) vs. endlessly debating the merits of one technology vs. another.

My default reference model here is simple: storage services delivered in the context of other IT infrastructure services, with monitoring and control exposed upwards to other consumers of the "service".  Sure, there are a variety of XaaS-ish models I've seen in larger IT organizations that work; the point here is to start thinking in that general direction.

Sometimes, I get an audience that is a bit defensive on this point.  That's understandable.  If I feel they're up for a little pushback, I ask them to explain the end-to-end process between a business user needing storage capacity, and them actually getting it.

Or how storage service delivery is continually measured for process improvement :)  Again, this stuff isn't exactly rocket science, but it *is* a shift in perspective for many.

More importantly, your target organizational and management model will drive a very specific agenda in storage management tools: old school vs. new school thinking.

Data Protection, Replication and HA
Although there are interesting discussions that can be had at a topic-by-topic level (e.g. what's new in replication?), I'm finding more traction with putting all the "keep bad things from happening" topics on the table, and discussing them as a continuum -- mostly, since the lines are blurring fast.

For example, I can't see a clear line anymore between accelerated disk-based dedupe backup and continuous data protection (e.g. application journalling) -- you're just going for more frequent points-in-time at one level.  Ditto for HA -- server and application failover is highly dependent on the data being available and usable.

Besides, the interesting parts of these discussions for me aren't the individual technologies, it's more about the integration points (VPLEX Geo, anyone?) and how management tools can turn the technologies into a "protection service catalog" that can be exposed to other entities in the organization.

And, if you think about it, that's sort of what most organizations want: a set of standardized (and integrated!) protection services that they can consume without worrying too much about the details.
Archiving And Tiering

I suppose this is an updated version of the old ILM (information lifecycle management) discussion, perhaps a bit more pragmatic in its current incarnation.
I try to make a couple of key points here to drive the thinking.

First, when it comes to archiving and tiering, there's a big difference in what's achievable if you have (or don't have) metadata to drive policy behavior.

An example of a mostly-metadata-free tiering approach would be EMC's FAST -- any knowledge we have about the data is externally supplied, and isn't intrinsic to the information itself.

There's a lot we can do -- even without metadata -- to boost performance and lower costs, but much more is possible if we've got metadata (associated directly with the information) telling us what to do.
An example of a metadata-rich archive might be email using something like SourceOne -- all sorts of rich policies can be generated simply because the data itself gives us some big clues on how it wants to be handled.

And, if you've got a big archiving/tiering challenge, maybe you ought to be thinking about generating useful metadata to make the automated policy management more tractable.
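
Here's a small sketch of what metadata-driven placement could look like, with the tier names, fields and thresholds invented for illustration -- this is not how FAST or SourceOne are actually implemented:

    def choose_tier(item):
        meta = item.get("metadata", {})
        # Rich metadata drives policy directly
        if meta.get("legal_hold") or meta.get("retention_class") == "compliance":
            return "compliant-archive"
        if meta.get("content_type") == "email" and meta.get("age_days", 0) > 90:
            return "email-archive"
        # Metadata-free fallback: rely on externally observed access statistics
        if item.get("io_per_day", 0) > 1000:
            return "flash"
        return "capacity-disk"

    print(choose_tier({"metadata": {"retention_class": "compliance"}}))           # compliant-archive
    print(choose_tier({"io_per_day": 5000}))                                      # flash
    print(choose_tier({"metadata": {"content_type": "email", "age_days": 200}}))  # email-archive
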

The second point I try to make is that archives have this sneaky way of assuming new roles over time.  The archive that was built to reduce storage costs is now asked to do compliant retention.  Or the archive that was built strictly for low-cost compliant retention becomes yet another information source for knowledge workers.

Once you have this ability to store non-trivial amounts of information all in one place, it's amazing what people might want to do with it :)

The final point I try to make is that the number and quality of external storage services is growing very quickly indeed.  Just like in the physical world, long-term storage of digital assets may be something you'd like someone else to do on your behalf.

Not that there's a perfect solution out there for everyone; I just think it's something you should increasingly be open to.

Securing Data At Rest
It's rather difficult to isolate a storage-specific security discussion -- simply because storage is only a small part of the entire security ecosystem -- but there are a few worthy topics that people are sometimes interested in.

One, of course, is the whole topic of encryption and key management, especially over the lifecycle of information -- backups, archives, etc.  The choices there are pretty clear, so really not much news there.
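
For what it's worth, here's a minimal sketch of the key-management idea: the stored record carries a key *reference* rather than the key itself, so backups and archives can outlive (and survive the rotation of) any individual key.  It uses the open-source "cryptography" Python library and a toy in-memory key store -- purely illustrative, not a recommendation of any specific tooling:

from cryptography.fernet import Fernet  # pip install cryptography

# Toy key manager: keys live (and get rotated) independently of the data,
# which matters when backups and archives outlive any one key.
key_store = {"backup-key-2011": Fernet.generate_key()}

def encrypt_at_rest(data: bytes, key_id: str) -> dict:
    token = Fernet(key_store[key_id]).encrypt(data)
    return {"key_id": key_id, "ciphertext": token}  # store the key *reference*, never the key

def decrypt(record: dict) -> bytes:
    return Fernet(key_store[record["key_id"]]).decrypt(record["ciphertext"])

record = encrypt_at_rest(b"quarterly backup contents", "backup-key-2011")
print(decrypt(record))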

More interesting to many is the notion of securing multi-tenant environments -- not only protecting tenants from each other, but protecting them from the administrators as well.  This latter topic goes into role-based credentials, audit logs, event monitoring and so on.  Again, storage is just one part of the security ecosystem and has a role to play.
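
A toy example of the multi-tenant idea, with made-up roles and actions: even the storage administrator has no data-access permission, tenants can only touch their own data, and every decision lands in an append-only audit trail:

import datetime

# Hypothetical role model -- administrators can provision and monitor,
# but cannot read tenant data; every authorization decision is logged.
PERMISSIONS = {
    "tenant_user":   {"read_own_data", "write_own_data"},
    "storage_admin": {"provision", "monitor"},
    "auditor":       {"read_audit_log"},
}

audit_log = []

def authorize(user, role, action, tenant, target_tenant):
    allowed = action in PERMISSIONS.get(role, set())
    # Tenant isolation: data operations only against your own tenant
    if action in {"read_own_data", "write_own_data"} and tenant != target_tenant:
        allowed = False
    audit_log.append({
        "when": datetime.datetime.utcnow().isoformat(),
        "user": user, "role": role, "action": action,
        "target_tenant": target_tenant, "allowed": allowed,
    })
    return allowed

print(authorize("alice", "tenant_user", "read_own_data", "tenant-a", "tenant-a"))  # True
print(authorize("bob", "storage_admin", "read_own_data", "ops", "tenant-a"))       # False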

Finally, most security-aware environments need detailed GRC reporting: monitoring how the storage infrastructure is configured, administrative events, and so on.  Lots to talk about there, but it gets very detailed very quickly.

Specialized Topics
Even with this generic framework, there are lots of interesting topics that don't really fit neatly into these standardized buckets.  Yes, there's more.  Much more.

One obvious example is advanced VMware integration: what's there, what's coming -- and how are people using it effectively?  Perhaps more storage integration innovation is going on in that domain than any other.  And there's more coming -- much more.

A less-obvious example might be all the similar integration work we're doing with other targeted environments: Microsoft and Hyper-V, Xen and KVM, Citrix, Oracle, SAP, mainframe, iSeries, etc.  Yes, I know, we tend to talk a lot about VMware integration, but the actual integration story is far broader than most realize.

Project Lightning got a lot of press at EMC World, so I'm always putting that one on the table -- simply because it's very reflective of many of the underlying forces going on in the storage and broader IT infrastructure world.

If a lot of the audience's business involves shovelling content around the globe, I try to make time for the cloud object model as implemented by Atmos.  It's not for everyone, but -- with every passing year -- more and more IT professionals are looking for a solution in this area, so it's worth a mention.

Along the same lines, I try to share our thinking around storage for big data applications: big data analytics (think Greenplum and Hadoop) as well as file-based workflows and applications (think Isilon).  Both are very different ways of thinking about storage at scale, and usually outside the mainstream of enterprise IT applications.

Occasionally, there's some interest in VSAs -- virtual storage appliances -- basically, storage software stacks that emulate traditional hardware-based storage devices.  While not exactly ready to solve the world's problems, they are emblematic of where the industry is heading: storage as virtualized software stacks that are pushed onto your shared infrastructure pool.

And, yes, there's occasionally some interest in the whole consumer/SMB storage space, as embodied by what our friends at Iomega are doing.  There's some cool stuff there, and it's usually fun to share just how far these folks have pushed consumer tech upwards into progressively more interesting market segments.

Service and Support Topics
There's still strong interest in topics like how we do interoperability testing in eLab (still a valuable service), or how we qualify hardware components (disk drives still fail, but less so), or how we interact with customer environments to provide customer support services -- especially in demanding environments.

There's also growing interest in professional and consulting services -- everything from help in formulating your own storage strategy, to designing/staffing/training the storage organization, to managed residencies and storage as a managed service.  It's not a pure technology discussion, especially in real-world situations.

In each of these areas, there are interesting shifts and trends that are worthwhile to point to, especially in a planning context.

Customer Recommendations
While every customer or partner situation is different, there are some generic food-for-thought recommendations we can make: e.g. start thinking about storage as a service vs. technology stacks, evaluate one or two of the interesting newer technologies to see if they make sense in your environment, and so on.

I think it's worthwhile to give people at least a few high-level recommendations to take out of one of these sessions.  We cover a lot of material, and it's nice to summarize at least to some degree.
Coming up with a specific, detailed and justifiable storage strategy -- well, that's usually a consulting engagement.

Your Feedback?
So, that's usually the framework I personally use when asked to do a non-trivial "storage strategy discussion".

Obviously, there's a lot there, but -- that's the point -- there *is* a lot there, and it's all food for thought to some degree or another.

That being said, I'd be interested in any feedback you might have as to the approach here -- is it useful, could it be made better, are there important topics being missed?

One of the things that I feel fortunate about is that -- working for EMC -- there's a lot to talk about.
We don't have to force-fit one or two products into every situation we encounter.  Sure, we end up talking about various EMC products and technologies, since those are the tangible examples of the thinking behind the strategic view.

As far as I'm concerned, the only reason we invest in these extended briefings is to help people get out of the day-to-day, take a moment to look ahead, and to start to plan for the future.
Because, when it comes to storage, the future is coming very fast at us indeed :)


By: Chuck Hollis

Tuesday, July 19, 2011

The Importance Of Friction When Considering Cloud

As I watch the industry talk-track around cloud and IT-as-a-service slowly evolve, I'm starting to get a bit ticked off.


I think in many cases the various industry cloud pundits may be doing people a disservice.
They're a passionate bunch, for sure, but I think -- in some cases -- they're losing sight of a few important real-world considerations that have absolutely nothing to do with technology, and everything to do with how people consume shared resources.
If I think back, I've perhaps been as guilty as anyone, but I've seen things in a new light for quite a while now.

My Rant?
The industry talk track on cloud and the IT-as-a-service model has generally evolved around making IT easier to consume -- in essence, removing various forms of friction and inefficiency for those providing the services as well as those consuming them.
But everything has its limits.

Less friction?  Good.  No friction?  Not good.

Here's why ...

The Basics
I've now been fully engaged in this whole cloud thing for about three years here at EMC.  The talk track inside and outside of EMC continues to evolve and mature, but -- for me -- it can't happen fast enough.  There's a lot of progress that's been made collectively, but we still have work to do.
One good example of evolution is the discussion around "cost".
If you'll remember, the original cloud discussion was "you want cloud because it's so much darn cheaper than everything else".  Well, maybe yes, and maybe no -- depending on the specifics of the situation.
Certainly, there's usually a strong case that can be made, but it's not a uniform statement.  And there are always non-trivial costs and effort to get to that envisioned state.

Going farther, taking various forms of cost out is just table stakes to so many of us business consumers of IT.  Sure, we like cheap, who doesn't?  But what most of us really lust after is speed, flexibility and agility.  Give us 80% of what we want in a very short time frame, and we'll debate the other 20% later, thanks.

But -- as business consumers of IT -- we can be a selfish bunch.  We tend to focus on what we individually want for our pet concerns.

Not that we're completely insensitive idiots; it's just that we expect other folks to be looking out for The Big Picture.

Anything that's easy to consume will be consumed more -- that's human nature.  Inevitably, ease of consumption leads to a well-understood "tragedy of the commons".  In one sense, this is not a new problem for humankind -- instead of grazing pastures, we're now talking about the modern equivalent: shared and pooled IT resources.

Hence a strong interest in newer forms of friction (or governance) that keep IT production and consumption easy to do, but still preserve and maximize the value of the shared resource for all.

Thinking About Friction
We're all familiar with the concept of friction -- we see examples every day.  Those of us with an engineering bent tend to see friction as something to be minimized -- it's overhead, it's resistance, it's the quintessential inefficiency.

Even in our personal lives, though, a little friction is a good thing.  For those of you who routinely brave cold winters, friction becomes important when driving or walking outside.

A zero-friction zone is a bad experience waiting to happen.

Which brings up an interesting question -- as we progressively engineer the friction out of our IT environments, how should we think about the "right" places to leave a little friction in place -- at least, until we get more comfortable with the new operating model?

From a purely technological perspective, we can now engineer IT production and consumption environments that have near-zero-friction.  Our immediate IT resource whim can be instantly satisfied, sometimes automagically based on external criteria.

And, when discussing this with IT thinkers, I make the argument that retaining friction in a few key areas is probably a *good* thing.

The Consumption Example
An IT organization stands up their first self-service environment.  Because there is substantial unmet demand, and almost zero friction associated with consuming it, the environment gets immediately and completely consumed.

Did the "right" workloads end up on the new environment?  Would some of the workloads be better served by a different environment, or consumption model?  What happens when a new (and worthwhile) workload comes along, and the resource is fully allocated?

Or, perhaps a bit more relevant, what happens when some well-intentioned but somewhat clueless individual puts up a workload that really shouldn't be there for security or continuity reasons?

Having a combination of realistic policies and human oversight isn't necessarily a bad thing when considering *any* shared resource -- especially at the outset.

The Production Example
Even if you've done a good job of controlling incoming demand along the lines above, removing all friction at the back end creates similar problems.

It's not hard to imagine the erstwhile VMware administrator merrily provisioning virtual machine after virtual machine until they eventually exhaust some non-server resource such as network or storage or even licenses.  Not that the VMware admin (or whoever) is a bad person; they just have never had to think about *all* the resources they're consuming, rather than just the stuff they usually work with.
Again, having a combination of realistic policies and human oversight makes sense even for entirely-within-IT consumption models.
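
A trivial sketch of what that sort of policy check might look like -- made-up resource names and numbers, but the point is that a provisioning request gets tested against *all* the pools it draws from, not just compute:

# Hypothetical capacity snapshot: one more VM consumes more than CPU and memory.
capacity  = {"vcpu": 64, "memory_gb": 512, "storage_gb": 4000,
             "network_ports": 48, "licenses": 20}
allocated = {"vcpu": 60, "memory_gb": 400, "storage_gb": 3900,
             "network_ports": 47, "licenses": 20}

def exhausted_by(request: dict) -> list:
    """Return the list of resources this request would push past capacity."""
    return [r for r, amount in request.items()
            if allocated.get(r, 0) + amount > capacity.get(r, 0)]

vm_request = {"vcpu": 2, "memory_gb": 8, "storage_gb": 200,
              "network_ports": 1, "licenses": 1}
blockers = exhausted_by(vm_request)
if blockers:
    print("Provisioning blocked; exhausted resources:", blockers)
else:
    print("OK to provision")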

But how do you get that "right balance" of optimization without making the whole process burdensome for everyone?

I am not claiming to have the perfect answer for any and all situations, but -- over the last handful of months -- I've picked up some tips and tricks that others are using around these issues.

It's A Journey, Not An Event
More than a few organizations have fallen into the trap of designing the "perfect" process to govern the consumption of resources.  Personally, I think this is a fool's errand.

For starters, any process or policy engenders a reaction as people figure out how to use it.  Sometimes those reactions are predictable; very often they're not.  I've seen that it's better to think in terms of an initial approach, and then frequent updates as experience is gained -- often settling out into an equilibrium before too long.

Context changes as well: new requirements, new constraints, etc.  Like the CFO mandating a complete freeze on IT expenditures for the next three months.  Processes and policies that can be quickly changed and communicated are far more useful than ones that can't be.

More to the point, the best policies and processes are built on experience.  The mindset ought to be to find a useful starting point, and continually enhance the approach.

People In The Process Can Add Value
So much focus seems to be put on achieving nirvana: complete and total automation of each and every IT process.  While that's a notable (and completely theoretical) goal, having reasonably smart people in the loop -- armed with efficient processes -- appears to be much more desirable.

A good example might be provisioning a secured application environment.  While it's fine to advertise the capabilities of the secure service, I for one would be interested in having a real, live conversation with anyone who intended to use it.  Having someone in the workflow who contacts the requestor and asks a few questions would be a good thing.

Once the decision was made to go ahead, automating the provisioning and monitoring of the supporting capabilities -- sure!
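
Sketched out, that workflow is nothing fancy -- a deliberate human gate in front of fully automated provisioning.  The states and function names here are hypothetical, just to show the shape of it:

from enum import Enum

class State(Enum):
    REQUESTED = 1
    APPROVED = 2
    REJECTED = 3
    PROVISIONED = 4

def review(request, consultant_approves: bool, notes: str):
    """The deliberate friction point: a person talks to the requestor first."""
    request["review_notes"] = notes
    request["state"] = State.APPROVED if consultant_approves else State.REJECTED
    return request

def provision(request):
    """Everything after approval is automated."""
    if request["state"] is not State.APPROVED:
        raise ValueError("cannot provision an unapproved request")
    # ... call the orchestration / provisioning tooling here ...
    request["state"] = State.PROVISIONED
    return request

req = {"requestor": "finance-app-team", "service": "secured-app-env",
       "state": State.REQUESTED}
req = review(req, consultant_approves=True,
             notes="Confirmed data classification and continuity needs")
req = provision(req)
print(req["state"])  # State.PROVISIONED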

Chargeback Isn't A Complete Answer
Just because someone has money to spend doesn't mean that they're necessarily 100% informed as to the various tradeoffs between the services.  As service catalogs get progressively richer and easier to consume, there's an associated skill required to be a knowledgeable and proficient consumer.
I remember trying to figure out which was the right mobile phone plan for my family several years ago.  There was me, my wife, and two teenage kids who really liked to text a lot -- plus the fact that we travel occasionally.

To make matters interesting, the service provider had at least 20 different plans, options and sub-options for me to thoughtfully consider.

I had money to spend, but I didn't know what the heck I was doing.  A bit of friendly advice would have really helped at that particular moment :)

A Few Practical Examples
My good friends within EMC IT are wrestling with these very issues, and they've done a number of pragmatic things to create a bit of friction in the process while gaining experience on the new consumption dynamics.

For one thing, a fair portion of the self-service cloud ("Cloud 9") has a standard 90-day window.  That means that after 90 days, your stuff automagically goes away.  While not ideal for every use case, that sort of restriction goes a long way toward positioning the internal service for transient needs vs. ongoing requirements.  It's highly unlikely that someone's going to put up a sensitive workload on a virtual machine that's only going to exist for three months ...
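
The mechanism behind that sort of window is about as simple as it sounds.  Here's an illustrative sketch (inventory and dates are made up; EMC IT's actual implementation is surely different) of a cleanup job that reclaims anything past the standard window:

import datetime

NOW = datetime.datetime(2011, 7, 19)
MAX_AGE = datetime.timedelta(days=90)

# Hypothetical inventory of self-service VMs and their creation dates
vms = [
    {"name": "dev-sandbox-01", "created": datetime.datetime(2011, 3, 1)},
    {"name": "demo-env-07",    "created": datetime.datetime(2011, 6, 15)},
]

def expire_old_vms(inventory):
    """Split the inventory into VMs to keep and VMs to decommission."""
    kept, reclaimed = [], []
    for vm in inventory:
        (reclaimed if NOW - vm["created"] > MAX_AGE else kept).append(vm)
    return kept, reclaimed

kept, reclaimed = expire_old_vms(vms)
print("reclaimed:", [vm["name"] for vm in reclaimed])  # ['dev-sandbox-01']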

Another practical example comes from EMC IT's "front desk" or solutions office.  People wanting IT infrastructure call in, and discuss their requirements with a "solutions consultant" who's familiar with the current internally-available service catalog.

Although all services are designed to be potentially consumed in an on-demand manner, they're only available on request from the solution consultants.  Once the decision is made to go, everything is highly automated.

I spoke to one customer who'd done something interesting -- although a bit unusual.  The business people still thought they were buying physical servers and infrastructure -- the processes they'd been using for years were still largely intact.  Except, once within IT, they were carved from a shared virtual pool.  The user-visible management tools showed what looked to be physical resources -- except they weren't.

Rush jobs, changes in specifications, cancelled projects, etc. didn't result in any stress for the IT team.  The "friction" in this case belonged entirely to the business -- specifying their requirements, creating justification, getting funding, etc.  The IT guys just built a largely frictionless environment to satisfy physical requests.

Clever.

Organizing For Success
If you look inside of organizations that are seriously doing this stuff, you'll find a different set of organizational constructs.  My best example comes from EMC IT, but I've seen it elsewhere.
IT services (whether consumed externally to IT, or internally consumed by other parts of IT) are defined and delivered by service owners.

Storage as a service, network as a service, VM as a service, infrastructure as a service, etc. etc. are all owned by individuals that see themselves as capitalists selling to an internal audience.

More advanced services are built by composing underlying services.  Eventually, those services are exposed in such a way that a non-IT person sees them.  But the lines of accountability are clear.

In these models, each service manager is responsible for balancing supply and demand.  More importantly, they are the "friction points" (really "market makers") layered over automated delivery mechanisms.  If the storage service manager is seeing too much demand for a certain kind of storage service, he/she can change policies and/or internal pricing to bring supply and demand more into balance.

Conversely, if no one wants the storage services being offered, you've got the wrong storage services and perhaps the wrong storage service manager.
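
If you want to picture that "market maker" role in code, it might be nothing more elaborate than a rule like this -- purely a sketch with made-up utilization targets, not how EMC IT (or anyone else) actually prices anything:

def adjust_internal_price(current_price, capacity_used_pct,
                          target_low=0.60, target_high=0.85, step=0.10):
    """Nudge the internal price of a storage service toward equilibrium.

    If utilization runs hot, raise the price to damp demand; if the service
    sits idle, lower the price (or rethink the service itself).
    """
    if capacity_used_pct > target_high:
        return round(current_price * (1 + step), 2)
    if capacity_used_pct < target_low:
        return round(current_price * (1 - step), 2)
    return current_price

print(adjust_internal_price(1.00, 0.92))  # 1.1 -- demand running hot, raise the price
print(adjust_internal_price(1.00, 0.40))  # 0.9 -- little demand, lower the price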

As a matter of fact, you can see this concept of a "service owner" at multiple locations in the IT stack -- including service owners who directly face business users.  Is it perfect?  Hardly.  Does it work pretty well where I see it?

Yes.

Back To My Rant?
Yes, cloud concepts are wonderful things.  The promise of delivering a wide range of IT services that are faster, cheaper, more flexible, etc. -- all real and tangible, and everyone wants them.  No question that the industry is moving in that direction, and fast.

That being said, in the process we're creating pools of shared resources that can be consumed on a moment's notice -- in other words, a potentially frictionless environment.

Maybe the demand for IT resources is infinite, but supply certainly is not.

And I, for one, would like to see more discussion from the industry clouderati around how to engineer some well-considered friction into customer environments; otherwise, ugly things are certainly going to happen.


By: Chuck Hollis

Monday, July 18, 2011

Survey: People Know Online Risks But Often Ignore Them

Surveys are a great window into people’s minds, especially when they can illuminate contrasting, and even contradictory, behaviors in the same group. Results from the Symantec Online Internet Safety Survey have done just that. The most compelling finding—that respondents frequently proceed with online transactions they know might be insecure—inspired me to ask not just, “What are they thinking?” but “What are they thinking?!?”

The survey’s focus must be on many people’s minds, as we’ve had an extraordinary response: 301 people in just a few days! My initial impressions of the results are below. Feel free to share your comments and questions on the original edition of this post.

Findings

Risky behavior remains common despite respondents knowing better
What struck me the most was that in many cases respondents continued online transactions even when those transactions lacked security cues respondents knew should be there. For example, 80 percent of respondents knew to look for the padlock icon signifying Secure Sockets Layer (SSL) encryption, but only 55 percent said they would abort a transaction if they didn’t see it. Similarly, 81 percent knew to look for secure Internet connections (HTTPS) but only 56 percent got spooked by secure URLs not matching certificate domains (not an exact correlation, I know, but related). These are differences of 25 points! What is driving this reckless behavior?

An equally notable figure is that 15 percent don’t use secure connections for social media activities even though they know improved security is available. Come on, people!  

People know to bail out of online transactions they suspect aren’t secure
Exactly three out of four respondents (75 percent) have abandoned online transactions because they felt the website wasn’t secure. This figure affirms respondents’ understanding of security cues and isn’t surprising given respondents’ high sensitivity to data loss. In fact, I’m wondering why the figure isn’t higher, closer to the high 90s like in Questions 1 and 2 (see below). Why would a quarter of respondents not cancel such transactions? Do they only go to websites they trust? And how do they know that trust is warranted without those security cues?

Many people are still learning about new browser security cues developed to counter evolving threats
The majority (55%) of the respondents knew to look for a green address bar—the sign of a website having an Extended Validation Secure Sockets Layer (EV SSL) certificate. More than half of respondents (54 percent) knew a green address bar means a website is secure and only one percent said it didn’t make them feel safe. In contrast, nearly half (46 percent) either didn’t remember seeing the bar or didn’t feel either way about it. These figures indicate that popular understanding of the value of the green address bar is growing, but this new security feature is still not top of mind for many users. Perhaps businesses can help educate their users about their use of the green bar, where applicable. If you need help with that, there are great resources available at the VeriSign Authentication Services site.

Moreover, only 42 percent knew to look for a third-party trust mark or seal, and roughly one in three respondents (35 percent) said lack of a seal worried them enough to end an online transaction. These figures may indicate most people don’t yet understand how seals represent an important security guarantee. Think about that for a moment. Online businesses could be losing a third of their potential transactions simply because the site lacks a recognizable trust mark assuring users that the site is safe.

At the same time, more than four out of five respondents knew to look for the padlock icon and/or the “s” in the HTTPS in the URL address of a website (80 percent and 81 percent, respectively) which is not too surprising, since users have been conditioned over the years to look for these traditional cues. A vast majority of respondents know the value of secure connections (HTTPS) and how to use them—77 percent set their social media security tools to use secure connections whenever browsing or logging in.

Nearly everyone has armed themselves with knowledge about security, but room for improvement still exists
Nearly all respondents (97 percent) considered themselves either somewhat or extremely knowledgeable about keeping their confidential data safe when shopping or banking online. The breakdown here was much more even, with 54 percent saying they were extremely knowledgeable and 43 percent somewhat knowledgeable.

Keeping confidential data safe when shopping or banking online is a universal concern:
Ninety-eight percent of respondents were either somewhat or extremely concerned. What’s telling is that 82 percent were extremely concerned and only 17 percent somewhat concerned. That means more than four out of five respondents see protecting their data as a top priority.

This data ties into other findings that phishing attacks are widespread but not always recognized as a threat. More than one out of seven respondents (16 percent) said they had been phished, highlighting how endemic cybercrime is today. Five percent of respondents, though, had no idea what phishing attacks are—a dangerous blind spot. Think you know what a phishing site looks like? Play our Phish or No Phish game to see if you can tell the difference.

That wraps up my first take on the data. Thanks again to everyone taking part in the survey.

By: Ryan White

Friday, July 15, 2011

A Look inside Targeted Email Attacks

The number of targeted attacks has increased dramatically in recent years. Major companies, government agencies, and political organizations alike have reported being the target of attacks. The rule of thumb is: the more sensitive the information an organization handles, the higher the possibility of becoming a victim of such an attack.

Here, we’ll attempt to provide insight on a number of key questions related to targeted attacks, such as where the malicious emails came from, which particular organizations were targeted, which domains (spoofed or not) sent the emails, and what kinds of malicious attachments the emails contained. Our analysis of the data showed that, on average, targeted email attacks are on the rise:


Figure 1. Targeted attacks trend
 
Origin
For this analysis, we first looked at the origin of the email messages. The emails were launched from 6,391 unique IPs across 91 different countries, spread throughout the world. Based on the representative set of data we have, below is a regional breakdown of email-based attacks:


Figure 2. Malicious email origin, by region
 
General Targets
Now, we ask ourselves, which sector is the most likely target of these attacks? Below are the top 10 most targeted types of organizations, derived from the domains that the emails were sent to:


Figure 3. Malicious email attack targets, by industry

Three out of the top 10 are governmental agencies. Among the remaining seven organizations, four have strong ties to either local or international governmental bodies. Two organizations (in sixth and tenth position) are not under governmental control; however, their business operations are heavily regulated and may be influenced by governmental organizations.

Governmental organizations are obviously targeted for their politically sensitive information. But why target NPOs and private companies? It’s a foot-in-the-door technique. By compromising those companies with strong ties to government agencies, attackers may acquire contact information for government personnel and craft their next attack around that stolen information.

In one particular organization, ranked 7th on our most targeted list, we observed the following:

•    Forty-one people received 10 or more emails, making up 98% of the total attack emails sent to that organization.
•    The remaining 2% of emails were targeted at 13 others, resulting in an average of less than two emails per person.

This clearly indicates that certain individuals are targeted more than others, probably because of their profile or particular status within the organization. In this organization, the President, Vice President, Directors, Managers, and Executive Secretary were all targeted. All of their profiles—including email addresses and job titles—are publicly available, which is most likely how malicious attackers got hold of their information in the first place.

Having said that, targeting the top-ranking personnel in an organization is not a “must” for attackers; often, targets are likely to include P.A.s as well as I.T. staff (who often have administrative rights on the target infrastructure). Once the attacker successfully infects or compromises one machine in the organization, they then have the potential to compromise other machines or devices on the same network. This may enable the attackers to harvest further contact information (belonging to other organizations) along the way, which leads to future attacks against different entities—the attackers just need that initial foot in the door.

Specific Targets
We’ve looked at the sectors that are the most targeted in our email collection, but what of the individuals themselves? The following graph shows the number of email messages that the top five targeted email addresses (not domains) received over the past two years:


Figure 4: Volume of email received by the top five targeted email addresses

All victims experienced regular spikes followed by a remission. This essentially means that if a user does not receive malicious emails at a particular point in time, he or she will probably receive them sometime in the near future. Perhaps the attacker is lulling the user into a false sense of security in an attempt to strike when his or her guard is down.

In the below graph, victims 3 and 4 belong to the same organization. We can see that they share a very similar trend regarding the timing and volume of emails received:


Figure 5: Victims 3 and 4 belong to the same organization

This suggests both victims were targeted by the same attacker, probably for the same reason. The next graph shows the distribution of the number of email messages versus the number of users:


Figure 6: Email distribution compared to number of recipients

There were 23,529 users who received 10 or fewer emails each, and you might expect that large group to account for the overwhelming majority of the emails. They do receive most of them, but not by as wide a margin as you might think: the top 833 recipients (those who received 11 or more emails) account for 30.55% of the total attack emails, while the bottom 23,529 recipients account for 69.44%. Again, this shows that a small fraction of the total recipients (3%) received a large portion (one third) of the total emails sent.
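
For anyone who wants to sanity-check that kind of concentration against their own mail logs, the computation is straightforward. Here is a small Python sketch (the per-recipient counts below are invented for illustration; the figures above come from the real dataset):

from collections import Counter

# Invented per-recipient message counts, just to show the computation
counts = Counter({"victim1@example.org": 310, "victim2@example.org": 240,
                  "victim3@example.org": 95,  "user4@example.org": 4,
                  "user5@example.org": 2,     "user6@example.org": 1})

heavy = {r: c for r, c in counts.items() if c >= 11}  # "heavily targeted" recipients
total = sum(counts.values())
share = sum(heavy.values()) / total

print(f"{len(heavy)} of {len(counts)} recipients "
      f"({len(heavy) / len(counts):.0%}) received {share:.0%} of all emails")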

Filetype
We have identified the most targeted organizations and looked at some of the individuals in those organizations. Now, let’s identify the most common file types used for malicious attachments in email-based attacks.


Figure 7: Top 10 attachment file types

PDFs lead the way, followed by Microsoft Word’s .doc format. Somewhat surprisingly, executables (.exe) make up almost 10% of the volume. Most organizations block executable attachments at the gateway, for good reason, so this would seem to be a fairly poor choice by attackers. PDF and Word attacks usually follow one of two distinct patterns of infection: exploiting a vulnerability in the application, or embedding a malicious file in the document. Both methods require the document to be opened by the recipient. The former can be prevented by applying patches as soon as they are released, and the latter can be avoided through education and awareness.
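
As an aside, the gateway filtering mentioned above is conceptually simple. Here is a minimal, purely illustrative sketch of an extension-based quarantine check (the blocked list and function name are my own, not taken from any Symantec product):

import os

# Extensions commonly stripped or quarantined at the email gateway
BLOCKED_EXTENSIONS = {".exe", ".scr", ".pif", ".com", ".bat", ".js"}

def should_quarantine(filename: str) -> bool:
    """Return True if the attachment should be quarantined by policy."""
    _, ext = os.path.splitext(filename.lower())
    return ext in BLOCKED_EXTENSIONS

for name in ["invoice.pdf", "report.doc", "update.exe"]:
    print(name, "->", "quarantine" if should_quarantine(name) else "deliver")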

Looking at the top 10 malicious emails sent to the top 10 most targeted organizations shows:

•    Ninety-five percent of the time, 10 or more emails are sent to the same organization.
•    Approximately 60% of the emails sent to those organizations appeared to be tailored (more attractive) for each organization, hence increasing the chances of those emails being read and their attachments opened.

What this tells us is that attackers didn’t target specific individuals within an organization; rather, emails were dispersed to maximize the chances of infiltrating the organization. This is a clever move by attackers. As is repeatedly indicated throughout this blog, a foot in the door is all that is required to inflict further damage to the target.

Conclusion
So, now that we’ve looked at some of the trends apparent in this relatively large subset of phishing emails, it must be pointed out that none of the emails in question actually made it to the intended targets because they were, of course, blocked by Symantec.cloud technologies.
In summary:

•    On average, targeted email attacks increased during the two-year period we looked at.
•    The more sensitive the information that an organization handles, the higher the probability of becoming a victim of such an attack.
•    The government/public sector is the most targeted industry.
•    A small percentage of people receive the bulk of the emails.
•    The attachments of choice are .pdf and .doc, making up a combined 67% of all targeted email attachments.
•    Some targeted attacks can be extremely well crafted and quite convincing.
•    Certain organizations and companies make for more attractive targets than others.
•    The people who work for these “higher value targets” need to take extra special care when dealing with emails that contain attachments or links.

If you receive an email that you think is suspicious, err on the side of caution and ask your I.T. department for assistance before you click.

If you would like to obtain additional information on email trends and figures, Symantec Intelligence reports are freely available in PDF format at http://www.symanteccloud.com/globalthreats/overview/r_mli_reports.

By:  Shunichi Imano