Thursday, July 26, 2012

What Are The Risks Of A Lost Or Stolen Mobile Device?

Have you ever thought about what would happen if you lost your mobile phone? These days we rely on our mobile phones more than ever. For a lot of us, losing one can be a nightmare, whether it's lost, stolen or hacked, especially since today it's become our most personal computer.

But despite the fact that half of us would rather lose our wallet than our mobile phone, only 4% of us have taken steps to protect our mobile device with security.

For most of us, our first reaction when we lose our wallet is "I have to cancel my credit cards, get a new license," and so on. When we lose our phones, we think about the pain and cost of replacing the device. But that's just the tip of the iceberg.

We don’t realize that our photos, emails, text messages and our apps can be an open door for thieves into our personal information, privacy and financial accounts.

And the time to replace your smartphone and its contents can consume as much as 18 hours of your life.

Mobile devices are on the move, meaning they can more easily be lost or stolen and their screens and keyboards are easier targets for “over the shoulder” browsing.

Take time to protect your mobile device. Here are some tips to keep your mobile safe:

  •     Never leave your phone unattended in a public place.
  •     Put a password on your mobile and set it to auto-lock after a certain period of time.
  •     If you use online banking and shopping sites, always log out and don't select the "remember me" function.
  •     Use mobile security software that provides anti-theft protection (backup and restore of the information on your phone, plus the ability to remotely locate it and wipe its data in case of loss or theft) as well as antivirus and web and app protection.
By Robert Siciliano

Robert Siciliano is an Online Security Evangelist to McAfee. Watch him discussing information he found on used electronic devices on YouTube. (Disclosures)

Wednesday, July 25, 2012

Phishers Use Olympic Lottery Scams For Summer Games

Fishing, of course, is the sport of tossing a baited hook into the water and then patiently waiting for a fish to bite.

Phishing is similar. The cybercrook sends out spam email and waits for a victim to take the bait. A phisher can send thousands of phishing emails a day, and eventually some people will get hooked.

Phishing is a multi-billion dollar business. Unlike the ongoing depletion of the ocean’s fisheries, there are still plenty of people out there to phish. Today, many victims in developing nations like India and China have only recently gotten broadband Internet access, and are considered fresh meat by the bad guys.

Phishers follow a similar editorial calendar as newspaper and magazine editors, coordinating their attacks around holidays and the change in seasons. They capitalize on significant events and natural disasters, such as Hurricane Katrina, the Japanese Tsunami and the swine flu. On their radar right now is the 2012 Olympics.

Francois Paget, Senior Threat Researcher at McAfee, discovered numerous emails combining lottery scams and the Olympics. Like chocolate and peanut butter, these two topics go great together.

“These mails inform the recipients that they have won a substantial amount of money. After contacting the lottery manager, the victims of these rip-offs will be asked to pay “processing fees” or “transfer charges” so that the winnings can be distributed. In some cases, the organizers ask for a copy of the winner’s passport, national ID, or driver’s license. With that personal information compromised, future identity theft activities are guaranteed.”

Awareness is the best way to avoid being scammed. Knowing what the bad guys are doing to hook their victims, and learning how not to get caught, is your best protection. Here’s a video that explains what phishing is and how to detect if an email is a phishing attempt. You should also be aware of phishing when reading email on your mobile phone. For more information about mobile phishing, read this.
  •     Invest in security software that includes antivirus, anti-spyware, anti-phishing and a firewall.
  •     Never click links in the body of an email unless you are 100% sure it’s legit.
  •     Don’t go snooping around your spam folders opening emails that look suspect.
  •     When in doubt, delete. Like mom said, if it sounds too good to be true, it is.
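One classic phishing tell the tips above guard against is a link whose visible text names one site while the underlying href points somewhere else entirely. As a rough illustration (not any vendor's product, just a standard-library Python sketch), here is one way such mismatched links could be flagged:

```python
import re
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkExtractor(HTMLParser):
    """Collects (href, visible text) pairs for each <a> tag."""
    def __init__(self):
        super().__init__()
        self.links = []      # finished (href, text) pairs
        self._href = None    # href of the <a> currently open, if any
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def suspicious_links(email_html):
    """Flag links whose visible text names a domain that the
    href's actual hostname does not match."""
    parser = LinkExtractor()
    parser.feed(email_html)
    flagged = []
    for href, text in parser.links:
        shown = re.search(r"(?:[\w-]+\.)+[a-z]{2,}", text.lower())
        actual = urlparse(href).hostname or ""
        if shown and not actual.endswith(shown.group(0)):
            flagged.append((href, text))
    return flagged

# A link that claims to go to a bank but points elsewhere:
html = '<p>Click <a href="http://evil.example.net/win">www.mybank.com</a></p>'
print(suspicious_links(html))
```

Real mail filters do far more than this, of course, but the underlying idea is the same: compare what the email shows you with where it actually sends you.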
By Robert Siciliano

Robert Siciliano is an Online Security Expert to McAfee. See him discussing identity theft on YouTube.

Tuesday, July 24, 2012

NFC at the Summer Games Could Be Exploited

NFC is an acronym for near field communication, a wireless technology that allows devices to talk to each other. In the case of a mobile wallet application, those devices would be a mobile phone and a point of sale device at a checkout counter.

Visa is testing out its PayWave contactless payment service at the Summer Olympics in London. Every athlete will get a Samsung Galaxy SIII phone enabled with near-field communication (NFC) along with Visa’s payment app.

NFC can be used in other ways beyond credit card transactions. It can integrate with hardware, such as your car, to unlock a door. It can activate software.

Soon enough, using your phone as a credit card will be commonplace. Mobile contactless payments, in which you pay by holding your phone near the payment reader at the register, are expected to increase by 1,077% by 2015.

All of this is well and good; however, there are security issues with NFC that still need addressing. McAfee researchers point out a technique called “fuzzing the hardware”, which involves feeding corrupt or damaged data to an app to discover vulnerabilities. Once such a vulnerability is found, the attacker must research and develop an exploit to perform various attacks (e.g., stealing credit card info, exporting the data to the attacker, or leaking credit card info to any requester). The attacker then needs to find a method to get the victim to run the exploit. This entire process costs attackers time and money, which can be justified in the case of NFC-enabled phones and a multitude of stores with card readers.
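To make the idea of fuzzing concrete, here is a minimal mutation-fuzzing sketch in Python. The `toy_parse` target and its three-byte header format are invented stand-ins for illustration only, not a real NFC parser; real fuzzing of NFC stacks targets actual NDEF messages and radio input. The technique, though, is exactly what is described above: corrupt known-good data and watch for the parser to misbehave.

```python
import random

def mutate(data: bytes, flips: int = 3, seed: int = None) -> bytes:
    """Return a copy of `data` with a few randomly chosen bytes corrupted."""
    rng = random.Random(seed)
    buf = bytearray(data)
    for _ in range(flips):
        pos = rng.randrange(len(buf))
        buf[pos] = rng.randrange(256)
    return bytes(buf)

def fuzz(parse, valid_sample: bytes, rounds: int = 200):
    """Feed corrupted variants of a known-good input to `parse`
    and collect every input that makes it raise an exception."""
    crashes = []
    for i in range(rounds):
        candidate = mutate(valid_sample, seed=i)  # seeded, so reproducible
        try:
            parse(candidate)
        except Exception as exc:
            crashes.append((candidate, repr(exc)))
    return crashes

# Toy target standing in for a tag-record parser: expects a fixed
# 2-byte header followed by a length byte that must match the payload.
def toy_parse(record: bytes):
    if record[:2] != b"\xd1\x01":
        raise ValueError("bad header")
    if record[2] != len(record) - 3:
        raise ValueError("bad length")
    return record[3:]

sample = b"\xd1\x01\x05hello"
found = fuzz(toy_parse, sample)
print(f"{len(found)} corrupted inputs crashed the parser")
```

An attacker (or a defender doing security testing) then studies the crashing inputs to see whether any of them can be escalated into an actual exploit, which is the expensive research step the article refers to.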

McAfee discovered exploitable vulnerabilities on Android and iOS phones. If someone has NFC turned on, an attacker in close proximity can pick up every signal to gather private information or payment information on an athlete’s device.  It is almost like pickpocketing, but they don’t even have to touch you.

McAfee researcher Jimmy Shah stated that an attacker wishing to target the Samsung Galaxy SIII devices at the summer games could purchase one easily and use the researchers’ data to help find vulnerabilities and eventually develop exploits to steal a victim’s credit card. The large number of readers at the Olympics will provide places where a successful attacker can use stolen credentials to make purchases.

Users can protect themselves by obtaining apps from the Google Play Market, Amazon’s Appstore, or their carrier’s app store, avoiding third-party stores that may have pirated or maliciously modified software. Reviews from other users are also helpful in determining safer apps.

NFC handsets are set to increase to about 80 million next year. Gartner estimates that 50% of smartphones will have NFC capability by 2015. Pay attention to what’s happening in the world of NFC, mobile payment and mobile security, because before you know it, your wallet will be your mobile phone.

By Robert Siciliano

Robert Siciliano is an Online Security Evangelist to McAfee. See him discussing identity theft on YouTube. (Disclosures)

Friday, July 20, 2012

Workload Mobility Is More Real Than You Might Think

One of the many holy grails in data center architectures has been the notion of workload mobility: the ability to pick up an arbitrary set of applications (and their data!), move them over a distance, and do so with an absolute minimum of effort and disruption.

It's an incredibly useful capability, especially if you've got multiple data centers and a veritable zoo of applications in your menagerie.

Move apps to get to newer hardware.
Move apps to get more performance.
Move apps to save some money.
Move apps to rebalance.
Move apps because you need to take some infrastructure off-line.
Move apps to increase protection levels.
Move apps because you've got a new data center location.

No shortage of good, practical reasons why you'd occasionally want to move a set of workloads.

They put casters on heavy appliances for a reason!

But moving applications around has always been a complex and disruptive pain -- lots of planning, lots of coordination, lots of downtime.  Not the sort of thing that IT professionals warmly embrace with enthusiasm and passion.

But -- for some IT shops -- that's started to change.  And we'll see more in the near future -- I'm sure of it.

Why Is Moving Applications So Hard?

By "moving", what I'm really talking about is  "moving an application from one data center to another, separated by distance".   If you've never contemplated what's involved, you might be asking -- what could be so hard?

The data has to be moved.  The application has to be shut down in one location and restarted in the new one.  Network addresses have to be updated.  The "supporting cast" -- backup, security, management, monitoring, etc. has to be notified and perhaps reconfigured.

On and on and on.

Think of everything you'd have to do to move your home between two states.  For me, it hurts my head just thinking about it.  Lots of interrelated and sequential activities, with  significant disruption involved.  And, of course, no shortage of complaints from the family during the process.

The inherent friction means you won't do it very often -- unless there's a compelling set of circumstances.

Now, take away almost all of the friction.  Take away almost all the complexity.  Take away the dependencies and sequential processes.  Once configured, easily move the whole shebang from here to there anytime there's a good reason -- no drama, no fuss, no complaints.  Just pack and go.

To those who've been in the IT business for a while, this might sound like science fiction.  Well, so did quantum entanglement -- at one time.

But -- very quietly -- workload mobility has now started to become a core capability that's getting routinely baked into IT infrastructure.

Meet Katten Muchin Rosenman LLP

They're a good-sized law firm -- 600 attorneys.  What makes them exceptional in this discussion is that -- well -- they don't appear to be particularly unique from an IT perspective.

Their law firm has to do the same sort of bread-and-butter IT stuff that law firms around the world have to do.  Like most of their peers, they have to provide a wide range of capabilities to demanding professionals, keep service levels high, while watching the expense line.

Interesting, but not exactly bleeding-edge IT stuff.  And that's the point.

The press release tells the story.  Business has been good for them.  They needed more data center capacity.  Not a bad problem to have in the general scheme of things.

Rather than simply find a larger facility, the EMC team presented a scenario of active/active data centers where workloads could easily move back and forth with a minimum of hassle.  Keep what you have, just add another increment of data center capacity, and think of it all as one, dynamic pool.

Katten was fortunate in that most of their environment was already fully virtualized using VMware.  And, based on their EMC relationship, they were willing to give the EMC VPLEX approach a try.

Enter EMC's VPLEX

There's a lot to VPLEX, but -- at its core -- it uses very sophisticated caching technology to make data appear in two places at the same time when needed.  That's a very useful trick when you're moving applications around non-disruptively.

It's especially good at doing this with "hot" transactional data and traditional enterprise applications -- as you'd find with   a busy email system, billing database and so on.

VPLEX is currently packaged as an appliance that sits in the data path -- typically a redundant pair at each end of the network.  It works with most popular enterprise storage arrays, including non-EMC ones.

I wouldn't recommend trying to compare it with similar products, because -- today -- there's nothing else that does what it does.

By itself, it's pretty darn capable.  But couple VPLEX with VMware's vMotion, and you've got a very complete and very robust workload mobility solution.

Since the VPLEX introduction a few years back, it's quietly turned into (yet another) one of those EMC innovative technology success stories.  For example, the VCE folks now routinely use VPLEX to do cool workload mobility demos and zero RPO / zero RTO failover demos on Vblocks.

Life Made Easier At Katten

At Katten, over 250 production applications were moved from one data center to another. With no drama, and no disruption -- and no one the wiser.

Alexander Diaz of Katten offered this observation in the press release:

"For a major data center migration, we were able to move a running virtual machine across our cloud to the new data center 25 miles away in about 15 to 20 seconds with VPLEX and vMotion. With VPLEX's capabilities, I have confidence that we could move the data across even further distances should our data center needs evolve over time."

"VPLEX kept the storage in synch in both data centers. Four engineers moved 30 to 40 virtual machines the first weekend and then gradually moved over 250 systems during the next three weeks. The servers stayed up the whole time and no one in the firm knew that we had migrated our entire data center. The 'old' way would have meant days or up to a week of downtime for certain systems and a dozen engineers working around the clock."

"VPLEX has allowed us to raise the bar and provide our firm with enterprise-class business continuity—and a truly active/active data center model. Using the VPLEX for our virtual machines and stretch clusters, we can do maintenance and upgrades on hardware whenever it's needed without any downtime. We're also getting more utilization out of our infrastructure by balancing workloads across multiple sites."

"When someone in the firm needs a new application, we're expected to respond quickly. EMC technologies along with virtualization enable us to bring up a new system in about 40 minutes—start to finish. Before, it would take weeks. Sometimes our users think it is magic, so you could say VPLEX gave us a special wand to get the job done".

Here's what I think is cool about this story: we're not talking about a bleeding edge web company, or an intergalactic financial institution, or a military research lab or similar IT exotica.  Katten is a very successful large law firm, and -- as such -- they use IT to get their work done -- it is clearly not an end in itself.

Workload mobility was one of the tools they had access to, and they decided to use it to their advantage.  Not just for this project, but to create a capability they could come back to over and over again.  Good work, guys.

But -- step back a bit -- and perhaps you'll agree with me that this is just the beginning of something much, much bigger.

Scale-Out Comes To Aggregations Of Data Centers?

When you contemplate computing or storage architectures, you quickly get enamored with the notion of properly implemented scale-out architectures that aggregate smaller resources into much bigger pools.

Start small.  Add more performance and capacity in small increments, when needed.  Automatically and transparently readjust workloads and resources as usage patterns change.  Improve redundancy at lower costs.  No downtime.  Use older gear and newer gear together.  Get real efficient.  Get real fast.  Or anything in between at any time.

Why? All resources are one, big seamless pool with no significant walls or boundaries.  That's what scale-out can do for you -- if done right.

Then you start considering cloud, and big data -- and you inevitably come to the conclusion that -- yeah -- this is the way things are going to be done everywhere before too long.  If you're a technology vendor, game on!

But at one level, a data center is really nothing more than a physical container for computing resources -- a really big, complicated server if you will.  The same benefits that come from aggregating computing and storage resources using scale-out approaches within the data center could potentially apply across multiple data centers that have many of the same properties.

Start with a small data center.  Add more performance and capacity in small increments, when needed.  Automatically and transparently readjust workloads and resources as usage patterns change.  Improve redundancy at lower costs.  No downtime.  Use older sites and newer sites together. Get real efficient, get real fast, or anything in between.

But it's one thing to virtualize, pool and create scale-out technologies in the confines of a single data center with very short distances and trivial latencies.  It's another thing entirely to do the same sort of thing with meaningful distances and significant latencies involved.

Years ago, I jokingly described the thought as RAID -- a redundant array of inexpensive datacenters.  Eliminate the impact of moving things over a distance, and how you thought about data centers would change drastically.  VPLEX hadn't been publicly announced yet, but it was clear where the technology could eventually lead over time.

I've started to use the phrase "virtualizing distance" to help describe what's needed here.  Some people here at EMC use the term "federation" or "dissolving distance" to describe similar concepts.

To each their own.

Regardless of the terms used, the ultimate goal is to make the appearance of data center distance disappear as much as possible -- just as we would want the appearance of "distance" to disappear between pooled scale-out servers and storage nodes in a local setting.

Removing those barriers between resources -- and enabling them to be easily and dynamically pooled -- is what scale-out is really all about.

True for servers.  True for storage.  Also true for multiple data centers.

How Do You Virtualize Distance?

We could wait for someone to crack the speed-of-light problem, but I'm not optimistic.  On a more pragmatic note, I would argue that there are three core technologies needed to effectively virtualize distance for these use cases.

One core technology is the need to virtualize network addressing and topologies over distance and separate domains.  Your world needs to look like one, big, flat network where IP addresses could move around if needed.  That's what Cisco's OTV technology does well, among other things.

Another is the need to virtualize and encapsulate server resources so they can be moved.   Obviously, that's something that VMware's vMotion does uniquely well.   But there's a problem with moving the data -- especially if you want to minimize disruption.

Essentially, you're going to want updated data to be in two places at the same time during the move.  That's what VPLEX does uniquely well, among other things.

Use these three technologies separately, or use them integrated together in something like a VCE Vblock if you choose.  It's there today, and it works quite well.  But I don't think everyone sees the implications just yet.

For me, it's pretty clear: I see people starting to think about data centers differently.  Architectural patterns tend to repeat themselves at different scale; just as we clearly see scale-out concepts infuse server and storage design, we're also seeing scale-out concepts slowly filtering their way into aggregations of data centers.

Perhaps it won't be too long before we think of "adding and rebalancing a data center" much the way we routinely think about adding and rebalancing a compute or storage node today.

And, given the ginormous amount of IT resource that goes into data center planning, construction, implementation, operations, etc. -- that particular shift in thinking is going to end up being a really big deal.

Many Of Our Architectural Assumptions Are Changing -- As They Should Be

Virtualization has changed the way we think about compute.  Tablets have changed the way we think about end user compute.  Java has changed the way we think about writing code.  Flash has changed the way we think about storage performance.  Cloud has changed the way we think about producing and consuming IT services.

On and on and on -- no shortage of the overused "paradigm shifts" to choose from.

Perhaps that's why so many of us are attracted to this space -- there's so much changing all the time.

To this long list, I now want to add "virtualizing distance" -- as evidenced by technologies such as VPLEX -- where we start to think of distance as something to exploit to our advantage -- as opposed to something that has to be merely overcome.


By Chuck Hollis


Thursday, July 19, 2012

BYOD and Back to School…Already?

Popsicles, water balloon fights, fireflies and staying up past your bedtime. These summertime rituals haven’t changed since I was a kid. What has changed is technology and the buying cycle for back-to-school.  Last week in Target I saw an entire wall display of backpacks.  My kids have been out of school for exactly one month and retailers are already pushing school supplies!

Sunday I woke up, brewed a pot of coffee, and sat down with my iPad to check Facebook and peruse my email. Cisco has embraced Bring Your Own Device (BYOD), so I have secure access to my work email on my iPad at home. I checked a few work emails, but I just couldn’t resist the Red, White and Blue 20% off coupon in my inbox.  Had I not seen the back-to-school display last week and received the coupon in my inbox, would I be buying khaki pants and blue shirts the 2nd week of July?  Shopping on a laptop is easy. Shopping on an iPad is just downright dangerous!  Consumerism was starting to take over, but in my mind I justified it as one less thing on my to-do list for August.

We have an 802.11n access point at home which gives us reliable coverage.  On Sunday it gave us enough bandwidth to support my basic email and web surfing, our AuPair on Skype with her family in Germany, my husband surfing the web, and my kids watching Back to the Future on Apple TV.  I had uniforms ordered before lightning struck the clock tower and Marty flew the DeLorean time machine back to 1955.

I can’t wait to get the next generation of Wi-Fi— 802.11ac is currently an IEEE draft standard and is expected to deliver up to 1.3 Gbps.  Not quite the 1.21 gigawatts required for Doc Brown’s flux capacitor to reach 88 mph, but enough bandwidth to address some critical pain points faced by users of Wi-Fi today— such as reliable connections for multiple devices and tons of  video!

Back to the Future II was released in 1989.  23 years later many of the technology predictions in the movie are amazingly accurate.  Think back to the television sets of the 1980’s.  They certainly weren’t flat panels.  In one scene Marty’s son uses voice commands to pull up 6 TV shows at the same time. In 1989 I’m sure this looked like an overload of information, but it’s really not that far off from the device and technology multi-tasking of 2012.  In a typical work day I can think of a few scenarios where I’ve been on WebEx or a Cisco TV meeting on my laptop, checking email, Facebook, or texting on my iPhone and instant messaging with a colleague all at the same time.

The use of ubiquitous video is also pretty accurate. In another scene Marty is working from home and accepts a video call from his boss in Asia.  The device looks remarkably similar to the Cisco EX 90  unit that my Director based in Tel Aviv uses to call me via TelePresence at my office in Richfield, Ohio.

And finally, how does back-to-school tie into Back to the Future?  I wonder what Hill Valley High School Principal Gerald Strickland would think of the flipped learning model?  Certainly students can’t be slackers in 2012.  In a flipped classroom students are expected to watch a teacher lecture via video prior to class.  Instead of a one-way lecture where teachers present to the students, in the flipped learning model students and teachers now use classroom time for interactive discussion.

There’s no question that demand for network capacity and bandwidth is growing. By the year 2015, it is projected there will be 15 billion new networked mobile devices.  Let’s just hope that, like the time machine in Back to the Future, all the devices and technology that we develop are used for the greater good of mankind. Oh yeah, and that the clock tower is saved!

For more details on Preparing Your Business for 802.11ac, Register Now for a webcast on July 17.

By Beth Dannemiller

Wednesday, July 18, 2012

Operation High Roller and the Future of Finance

Late last month, Guardian Analytics and McAfee Labs released the joint report “Dissecting Operation High Roller,” which details a new breed of sophisticated fraud attack. Unlike previous attacks using Zeus and SpyEye, these new tactics use server-side components and heavy automation to bypass traditional network security. 

New Heights in Heists: What’s New Compared to SpyEye and Zeus?

Extensive Automation – While most Zeus/SpyEye attacks rely on active participation by the fraudster to process a fraudulent transfer, most of the High Roller attacks were completely automated. This allows for repeated thefts once the system has been launched at a given bank.

Server-side Automation – Operation High Roller also adopted sophisticated server-side automation to conceal how the system interacts with online banking platforms. By moving fraudulent transaction processing from the client to a fraudster’s protected server, activity becomes more difficult to detect.

Rich Targets – The United States victims were all companies with accounts with a minimum balance of several million dollars (hence the name, Operation High Roller). Most of these victims were found through online reconnaissance and spear phishing.

Automated Bypass of Two-Factor Physical Authentication – The malware discovered within Operation High Roller is the first to work around the “smartcard/physical reader + PIN” combination of two-factor authentication. Normally, a victim inserts a smartcard into a reader device and enters a PIN, generating a digital token to authorize the transaction. The Operation High Roller attacks are able to generate an authentic simulation of this process during login to capture the token, using it to validate the transaction later in the online banking session.

Fraudsters Know the Banking Industry – The bad guys behind Operation High Roller clearly knew what they were doing as they carefully navigated around the regulatory triggers of bank fraud detection. For example, automated transactions were set to check the balance and not to exceed a fixed percentage of the account value.

The Future of Finance and Network Security

The Operation High Roller attacks hold implications for banks of all sizes, as targets ranged from some of the most respected financial institutions to small credit unions and regional banks. Moving forward, the finance industry should anticipate more automation, obfuscation and increasingly creative forms of fraud.

However, there are fraud prevention solutions that have been proven effective, even against the attacks documented in Operation High Roller. Anomaly detection solutions like those integrated into the McAfee Network Security Platform have been proven to detect the widest array of fraud attacks, including manual and automated schemes, as well as well-known and newly emerging techniques.

Share your thoughts on this topic in the comments below, and be sure to follow @McAfeeBusiness on Twitter for the latest updates on McAfee news and events.

By Tyler Carter

Tuesday, July 17, 2012

Human Power Still Fuels Network Security

Earlier this month, I had the honor of presenting the keynote at the Cyber Defense Symposium 2012.  I was invited to share the McAfee strategy and my vision around how we’re working to develop network protection against growing global cyber security threats and protect critical information networks of our financial markets, power grids, intelligence and defense systems.  Not only was I excited to be part of this cause, but I was very interested to hear the ideas from other business leaders around this world-impacting topic.

During my 20-minute speaking slot I shared a bit of background on the security landscape, what McAfee has learned from years of threat intelligence and analysis, and the protection we’ve developed.  I then moved into our vision of creating a future of even greater business confidence.  I wanted the audience to understand, given the growing impact of highly targeted attacks, that the industry needs to get out of the mode of ‘reacting’ to threats. I explained that McAfee has focused, and will continue to focus, on creating solutions that can anticipate all strains of cyber attacks and implement preemptive measures. With this intelligence, we can then develop and deploy sophisticated and targeted protection in advance of the malevolent actors.

After I took my seat, I was able to listen to the presentations from other business leaders throughout the industry.  The information was good, but I learned that at least one security provider believes that it’s unnecessary and archaic to integrate people into the cyber security foundation.  This intrigued me.  In my opinion, and in line with the McAfee philosophy, it’s absolutely critical to invest in human middleware to provide proactive security. Expert researchers are the key to identifying and preempting zero-day attacks.  Of course, a vendor that cannot afford the high cost associated with security research will try to argue that compute cores and software can deliver a similar level of protection.  I say, “Show me the money.”

Yes, I am proud of our innovations – in finding new ways to help enterprises actually prevent the negative impacts of attacks whether they are botnets, malicious URLs, malware or anything else.  But I am even more proud of our people.  I know we have the best team in place to create a platform that can anticipate the strikes, which is why we invest heavily in human capital.  We know that it’s critical to integrate predictive protection against the multiple threat vectors and attacks being created and injected into business networks through dozens of new entry points.  It was my intention to encourage enterprises to rethink their security architecture and refrain from a very common and pervasive reactionary strategy.  Putting people at the core of network security will augment the holistic approach McAfee believes will protect million-dollar enterprise networks from the typical (but stealthier) dime store hacker.

By Pat Calhoun

Thursday, July 12, 2012

IT Transformation: Does Your IT Group Fit The Profile?

I started writing in earnest about clouds and IT transformation back in January 2009 -- a little more than three years ago.

At the time, most IT groups would politely listen to me talk about the coming world of service catalogs, hybrid clouds, et al., nod a bit -- and that would be that.

Interesting, but not practical for them at that point.   Thank you for your time.

But, every so often, I'd meet an IT group that was intensely interested in the subject.   Their passion and enthusiasm stood in stark contrast to a vastly larger and vastly more passive crowd.

What made them different?  Why were they so interested, and everyone else wasn't?

I became very curious about these particular customers, and started to take the time to look deeper.  I began to construct profiles of the IT groups I was talking to, looking for a correlation between their expressed enthusiasm (or lack thereof) and what I could discern externally.

Over time, patterns repeated themselves, and an "ITaaS profile" (IT as a service) began to emerge.

During early 2011, I began to test my model.  I'd research a customer I was scheduled to talk to, score them using my profile, and attempt to predict the likelihood they'd be interested in the topic of IT transformation when I got in front of them.  I quickly discovered I had a very good predictor.

It's sort of nice to know what's going to happen before you show up with a bunch of PowerPoint :)

So, today, I thought I'd share with you what I've learned.

And, perhaps, figure out if your IT group fits the profile.

Real IT Transformation Isn't For The Faint Of Heart

Organizational transformations come in many sizes and shapes.  The particular transformation I'm discussing here is nothing less than a fundamental shift in the business model of an enterprise IT group.

IT starts by assuming they need to compete for internal IT spend, and aren't a monopoly or a government agency anymore.  They then set about to refashion themselves as the internal IT service provider of choice.  Along the way, they add new functions they didn't need in the past.  Roles, processes, skill sets and metrics all significantly change. 

The ITaaS model itself isn't all that new -- you'll find it at work inside every successful IT service provider.  It's just sort of new to the enterprise IT crowd.

Unless there was a strong motivation to change, why would you even bother?  Wouldn't it be easier to just continue on as before, making small and incremental changes as time and inclination warrants?   Why would you invest in completely rewiring the IT function?  Who would want to re-engineer how IT is produced -- and how IT is consumed?

To sign up for that mission, you'd have to be pretty motivated indeed.

And that's what my ITaaS profile is all about -- discerning who's motivated, and who's not.  Quickly and efficiently.  It's not a judgment statement; it's simply acknowledging that everyone's situation is different.

In my ITaaS profile, there are three things I'm looking for.  Hit all three aspects, and there's a good chance that we'll have a long and rich discussion around the IT transformation topic, with plenty of follow-on and deeper dives.

One of the inevitable outcomes is that you'll start to look at technology very differently, e.g. how well it can be used to deliver services vs. isolated functions.  Things like Vblocks become much easier to appreciate, for example.

Miss any one of the profile elements, and your interest in ITaaS will probably be academic at best.  You'll wonder why we're building some of the stuff we do.  And we'll probably end up talking about something else that you're more interested in.

So, what am I looking for?  See how many of these apply to you.

A Meaningful Change In The Business Strategy That Directly Impacts IT

If it's business-as-usual for the organization at large, it's probably business-as-usual for the IT group as well.  Conversely, change in the business inevitably drives change in the IT function.

Unless there's a sharp and disruptive change in the business approach, there's unlikely to be the need for a sharp and disruptive change in the IT approach.   It's a fairly simple idea when you think about it that way.

So, what constitutes a "meaningful change in the business strategy"?   Let me share what I'm looking for.

Obviously, if there's a new CEO or other senior business leadership, there's a good chance the organization will start to move in a new direction.  IT will inevitably be affected at some point.

A significant shift in business strategy, a new product line, or a change in the go-to-market strategy qualifies as well.  So does a sharp increase in competition, a rapid decline in revenue or profits, or perhaps a major batch of new regulatory requirements.  Significant M&A certainly qualifies.  Expanding into new geographies (e.g. China, India, etc.) qualifies.  And that's just a partial list.

These external markers aren't hard to find.  A bit of google-fu, and it's pretty easy to discern if there's a visible shift in the business causing a shift in IT.

A Critical Mass Of Empowered IT Leadership

By "empowered leadership", I mean there's a clearly discernible team in place with a mandate to change the way things are being done in the IT organization.

They're not looking for mere incremental improvements; they're motivated to re-engineer the IT function.

Sometimes, these IT leaders are folks who've been with their company and in their role for a while.  That's a great thing when you find it -- but you don't find it too often.

More likely, these empowered leaders are new to the role and brought in from elsewhere: another company or industry, perhaps from the business side, or maybe via a substantial re-organization.  Sad but true: familiar faces tend to make familiar decisions; new faces tend to make new decisions.

Put differently: there isn't going to be any IT transformation without a critical mass of strong, passionate and dedicated leadership.  This stuff doesn't happen by itself.

A Reasonable Starting Point

This is actually two things: something to build on; and something to point it at.

You can't (or shouldn't) try to change everything at once, especially in larger IT settings.  Instead, the idea is to put the new concepts and operational models to work in a small part of the IT function, and then expand as processes mature and familiarity grows.

But you need a place to start, and I've found that not everyone can easily find that logical starting place.

The "something to build on" is pretty easy to assess: a reasonable mass of virtualized servers (usually VMware) and the required skill sets to make it all work predictably.  Unless you're comfortable with virtualization technology and processes, it's just too far a leap to jump to an ITaaS model in one go.  And, yes, you'll find environments out there that just haven't gotten around to doing much with VMware, or haven't invested in the skills and processes to fully exploit what it can do.

In these cases, there's a good deal of foundational work to get the basics in place -- prior to contemplating an IT transformation of this nature.  No shortcuts that I've found so far.

The second component is finding an interesting place to point the first efforts: a use case that's (a) somewhat relevant to the business, (b) isn't mission critical, and (c) whose needs aren't being well-served by the current IT approach.  Better yet if you can find some business leader who's interested in a new approach to delivering IT services ...

Examples vary, but there are some popular favorites: test and dev for the app team, perhaps a VDI pilot, maybe a self-serve IaaS capability for the R&D team, and so on.

It's less about the individual project itself; it's more about creating an operational template for the future.  You're simply looking for a place where you can try doing things a new way, gain some experience -- and without betting the farm.

What Are You Interested In?

If you fit this profile, you're most likely not very interested in a detailed technology discussion.  You tend to assume that the required technology pieces are largely in place to do what needs to be done.  And you'd be right.

Instead, you're most likely interested in four things -- in order.

First, you're probably interested in creating the case for transformation.  You, the progressive IT leader, are largely convinced.  But there are a few more people in the organization that might need convincing: the executive committee, your business peers and -- of course -- your IT organization itself.

Making and communicating "the case" to these three audiences is important, and not something that's done routinely.  Here at EMC, we are fortunate to have plenty of assets as well as some nice professional services to help customers do just that.

Second, you'll want to study some blueprints: what's the new model, how is it different from the current one, what makes it different, etc.  We routinely supply our own examples of this to customers.  Your blueprints may end up being somewhat different, but you'll certainly want to take a look at a successful one as a reference point.

Third, you'll be very interested in the people side: roles, skills, measurement, alignment and the rest of it.  If our own internal EMC ITaaS transformation is any indication, a significant number of the job descriptions will have to be seriously reworked in the new model.  I like to joke that -- in this new world -- the "run book" for IT belongs to HR -- human resources.

The larger your IT organization, the more you tend to be interested in the people side.

Finally, you'll want to think about how you eventually grapple with the underlying financial model for IT.  I have observed that the way IT is constructed is largely a function of how IT is paid for.

If IT is paid for as a flat tax, you'll likely have a relatively bureaucratic IT function.  If IT is funded through major projects, you'll have an IT organization composed of major projects and little else.  However, if IT is funded to create attractive IT services that business people want to consume, well -- that's what you'll end up building.
To recap the four areas:
  • Making the case.
  • Understanding the models.
  • Transforming your people.
  • Modernizing the IT funding model.
Like I said, not for the faint of heart.

Putting It All Together

So, consider these three aspects I've learned to look for:

- a meaningful change in the business that's causing a meaningful change in IT
- a critical mass of empowered IT leaders
- a reasonable place to start: technology and use case
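At bottom, the profile is an AND of those three yes/no conditions. As a toy sketch in Python (the field names and structure here are my own illustration, not the author's actual model):

```python
from dataclasses import dataclass

@dataclass
class ITGroup:
    # Illustrative field names -- one per profile element.
    business_strategy_shift: bool    # meaningful business change driving IT change
    empowered_leadership: bool       # critical mass of leaders with a mandate
    reasonable_starting_point: bool  # virtualized base plus a candidate use case

def fits_itaas_profile(group: ITGroup) -> bool:
    """Hit all three aspects and a transformation discussion is likely;
    miss any one and the interest is probably academic at best."""
    return (group.business_strategy_shift
            and group.empowered_leadership
            and group.reasonable_starting_point)

# A group with no logical starting point doesn't fit -- yet.
print(fits_itaas_profile(ITGroup(True, True, False)))  # False
```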

What I've found is that -- once you fit this profile -- you're strongly interested in IT transformation.  And if this isn't you, we end up talking about something else more relevant to you and your particular situation.

Note what's missing from this list.  For example, there's scant mention of ROI -- unless it's Risk Of Ignoring.  Nor are there many caveats around industry, geography, organizational size, etc.   Or much of a budget discussion.  Or the fact that some of your software vendors don't like virtualization.  Or a detailed technology discussion.

None of these topics really comes up in any depth -- if you fit the profile.  Instead, you are highly motivated to get moving in the right direction -- sooner, rather than later.

I look at all the cloud banter on the internet and in the press.  I read it, but very little of it correlates with the hundreds and hundreds of customer discussions I've had on the topic over the last few years.  I think vendors might look at the world differently.

I know what to look for.  I know who's going to be interested, and exactly why they're going to be interested.  If they fit the profile, I'm relatively certain of how the conversation will progress from topic to topic.  It's scary sometimes just how linear the conversations can be.   I've written about most aspects; you can see the entire collection of deep-dives here if you're interested.

But this really isn't about me.  It's about you.

Does your IT group fit the profile?

If so, we should chat :)

By Chuck Hollis

Tuesday, July 10, 2012

70% of Teens Hide Online Activities from Parents—Why We Should be Concerned

Most major media picked up on a study that McAfee released called “The Digital Divide: How the Online Behavior of Teens is Getting Past Parents” that shines a scary light on how much trouble kids are getting themselves in online and how clueless most parents are.

Many people commented saying “I don’t need McAfee telling me kids lie” and I get that. But those who recognize the obvious may not realize the actions and consequences of those lies.

I’ll be the first to admit, and I’ve said this on national TV and radio, that I should be buried 6 feet under based on the way I lived my teen years. I lied as a means of survival, to cover up various acts that would have surely gotten me the belt. But what I did was a different kind of trouble compared to what teens are doing today.

People snicker when they learn that almost half of teens are looking at porn weekly. Really? This is no big deal? It’s true what they say -- “we become what we think about” -- and a 13-year-old isn’t in an emotional or physical position to be consuming hard-core violent porn.

Another example: more than 10% of 13- to 17-year-olds are connecting with strangers online and then actually meeting them in the real world. I doubt that before social media there were as many teenage girls meeting 30-year-old men on the street and getting into their cars. But with the Internet, these “friends” can seduce a teen girl via text or social networking sites and fill her emotional needs until he’s “got her.”

Are you really aware what this hidden behavior and lying is concealing? From the study, McAfee revealed that teens readily admitted to:

    Breaking into others’ social media accounts
    Hacking and manipulating grades in school
    Downloading illegally pirated movies, music and software
    Bullying, whether it was actively being a bully, being bullied or witnessing bullying

Any of these illegal activities could potentially get you, as a parent, pulled into a lawsuit.

This study, more than anything, points out how outrageously kids are acting online and how oblivious and overwhelmed their parents are. Perhaps Kevin Parrish, journalist and parent of teens from Tom's Guide, summed it up best when he said:

“The Internet can be a dangerous place, and allowing teens to run free in a virtual new frontier seemingly run by hackers is just downright insane. Allowing children to do whatever they want online is a huge security risk to your personal data, and a potential legal risk for them. Bottom line, the Internet is a privilege, not a right. Teens should be allowed to express themselves, but not to the point where predators come calling or the FBI comes knocking at the front door. Teens are propelled by emotion, not knowledge and experience, especially early on.”

At least one parent gets it.

By Robert Siciliano

Monday, July 9, 2012

5 Lessons from the LinkedIn Password Hack for Online Retailers

Whether it’s clicking on spam links or using “1234” as a password for both Twitter and banking, online merchants will always have to be mindful of consumers’ impressive ability to jeopardize their own information. It’s man against the machine. As merchants try to stay one step ahead with the latest security tools, it can be frustrating when customers unwittingly play into the bad guys’ hands.

We gained some valuable insight into this world of user-based threats just this month, when over six million LinkedIn passwords were breached and posted online. As it turns out, there were hundreds of duplicate passwords and patterns, most having to do with the site’s theme – ‘link’, ‘work’ and ‘job’ were all among the top five.

What retailers need to realize is that strong password management doesn’t just mean protection for customers. It also affects your bottom line – protecting your business from fallout in the event that a customer’s account is hacked.

So how can you, as an online retailer, help customers help themselves?

1. Be proactive and warn of “phishy” emails

One of the most surefire ways to get hacked is by clicking on malicious links in email. Email is ridiculously easy to forge, and links are easy to manipulate and redirect. Does your address bar read “ebay.com” or “ebayy.com”? Most consumers won’t think to check. This can become a serious problem for online companies, as LinkedIn now knows all too well. Hackers are quick to exploit news of a breach by crafting phishing emails – phony messages that mimic the language and style of your company’s messaging to extract sensitive information.

Help teach your customers security best practices by example – never send or request private information via email, including passwords or identity verification. Remind your customers of these security measures in the emails you do send, helping to decrease the likelihood that a malicious actor can leverage your content for a phishing attack.
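To make the look-alike-domain trick above concrete, here's a minimal sketch of an exact-match domain check. The trusted domains are hypothetical examples, and real phishing detection involves far more than this -- treat it as an illustration only:

```python
from urllib.parse import urlparse

# Hypothetical example: the only hostnames we consider legitimate.
TRUSTED_DOMAINS = {"ebay.com", "www.ebay.com"}

def looks_legitimate(link: str) -> bool:
    """Return True only if the link's host exactly matches a trusted
    domain -- 'ebayy.com' and 'ebay.com.evil.example' both fail."""
    host = urlparse(link).hostname or ""
    return host.lower() in TRUSTED_DOMAINS

print(looks_legitimate("https://www.ebay.com/signin"))         # True
print(looks_legitimate("https://ebayy.com/signin"))            # False
print(looks_legitimate("http://ebay.com.evil.example/login"))  # False
```

The key design point is the exact match: substring or "starts with" checks are exactly what attackers exploit with domains like ebay.com.evil.example.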

2. Follow the news

When a crisis occurs that could affect your customer base, let them know! Staying on top of the latest news and keeping your customers up-to-date isn’t just about password protection – it’s also about customer service. Looking out for your customers is the hallmark of great service both on and offline, and providing security insight is one way to build trust and loyalty for your online presence.

3. Be an educator

Retailers need to lead by example. Customers don’t always know how to create a strong password, and they don’t always understand why using different passwords for different sites is so important. Educate your customers by encouraging a password that contains a combination of uppercase and lowercase letters, numbers, and symbols with a minimum length – and let them know why. A general industry trend is to include a clause during sign-up that shares your company’s security policy, or a widget that rates their password strength from weak to strong.
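As an illustration of the strength-rating widget mentioned above, here's a minimal sketch of the underlying logic. The thresholds and labels are my own assumptions, not an industry standard:

```python
import string

def rate_password(password: str) -> str:
    """Rate a password from 'weak' to 'strong' based on length and
    character variety -- illustrative thresholds only."""
    # Count how many character classes the password uses.
    classes = sum([
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ])
    if len(password) >= 12 and classes == 4:
        return "strong"
    if len(password) >= 8 and classes >= 3:
        return "medium"
    return "weak"

print(rate_password("password"))           # weak
print(rate_password("Passw0rd!"))          # medium
print(rate_password("C0rrect-Horse$9Kx"))  # strong
```

A production widget would also check against lists of common passwords ("password", "link", "job", etc.), which the LinkedIn breach showed to be the bigger problem.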

4. Easy password recovery

One of the highest cost elements of customer service is dealing with lost passwords, and this will certainly become an issue if your company recommends password complexity. It’s like locking yourself out of your house or apartment – it may not happen often, but it happens to everyone. Your site must have a simple, secure and straightforward procedure for managing customer passwords that will make them easy to retrieve on-demand. Once customers know that their password information can be securely and easily recovered, they’ll be more likely to choose a variety of complex passwords across accounts – increasing their overall security.

5. Security from the inside out

By implementing a trusted website vulnerability scan like the McAfee SECURE™ service, you can proactively protect your business with daily scanning that checks for thousands of vulnerabilities that could lead to security breaches. The cost of a breach can be devastatingly expensive, especially for small retailers, when you take into consideration legal fees, call center expenses and lost employee productivity. When you add in the impact on brand image and the loss in customer confidence, this fallout could literally end your career as an online retailer – a fate no business should take lightly.

Unlike most consumers, retailers have a responsibility not only to themselves, but to the entire eCommerce community. It’s up to you to educate and protect your customers, which will in turn boost trust in online retail as a whole.

Share your thoughts on this topic in the comments below, and be sure to follow us on Twitter at @McAfeeSECURE for the latest eCommerce news and events.

By Nancy Levin

Friday, July 6, 2012

More On Mobilizing Your Enterprise

Just about every aspect of enterprise IT is in play these days, if you think about it.

One of the more challenging aspects is fully embracing the new endpoint for IT service consumption -- the ubiquitous mobile device.

To be clear, we're not simply talking about a BYOD (bring your own device) program, or re-hosting legacy apps on mobile devices via VDI -- although those are pieces of the bigger picture.

No, what we're really talking about is a fundamental re-thinking of how application experiences are built, distributed and consumed.  Mobile first, if you prefer.

And whether that new capability is pointed at your employees, your partners or your customers -- the changes are turning out to be very far reaching indeed.

EMC's own IT group is no stranger to these forces.  For the last few quarters, a small team has been working towards enabling mobility across our entire organization and business model.  I first introduced this story back in January of this year.

Now, five months later, I thought it would be good to circle back with the EMC IT team and get a status report: what have we done, and what have we learned?

Quite a bit, it turns out.

What Makes This Hard?

The rationale behind this transition is pretty obvious to just about everyone: we live in a mobile world.  We all carry powerful easy-to-use mobile devices, and we all prefer easy-to-consume application experiences to go with them.  The desktops and laptops are getting less use; the tablets and smartphones have quickly become our first go-to device.

We -- as IT consumers -- have made our preferences pretty clear.  Now it's up to enterprise IT organizations to figure out what to do about it.

And it’s not easy.  For example, IT has to think about devices and networks differently.  We have to think about security differently.  We have to think about how applications are constructed and consumed differently.  And, oh by the way, there are still plenty of desktops and laptops hanging around that aren't going away anytime soon.

But -- like most things in life -- there really isn't a choice about the general direction: it's pretty much a given.  The devil is in the details: how to organize, how to build capabilities, and how to mainstream into the general flow of activities.

In a nutshell, that's the idea behind an enterprise mobility platform and overall strategy: move the organization into the new world, and -- hopefully -- make it better than the world it replaced.

Catching Up With KK

One of my favorite characters at EMC is KK -- Narayanan Krishnakumar, the chief architect within EMC IT.  He's always working on cool stuff, including playing a very strong role in the definition and execution of EMC IT's approach to the mobility challenge.

So, KK, where is the team today?

We've made good progress over the last two quarters.

We've got our MDM (mobile device management) platform up, and we're now actively managing 4,500 mobile devices around the globe.  Once a device is registered, we essentially take control of the device from a security standpoint.

We've had a few issues in certain countries where it's not acceptable to remotely wipe an employee's device, but -- generally speaking -- we've seen strong adoption.

Part of the adoption success has been an important "carrot" we've created, which we call mWiFi.  If you walk into an EMC facility anywhere and have registered your device with us, you're instantly on our wireless network: no need to manually configure and authenticate.  It sounds like a little thing, but it's enough of a practical convenience that it's driving adoption nicely.

We have an enterprise app store that enables us to make the right apps available to the right users and manage the distribution. We now have three categories of apps in our enterprise app store.  One, obviously, is pointers to useful apps in the Apple App Store.  Another category is mobile applications from enterprise vendors, such as SAP BI.  And there's a third category of EMC IT-developed applications.

Securing the first category is fairly straightforward, as is the category of our internally-developed applications.  We're still working through the best way to secure vendor-supplied enterprise mobile apps, but we're making progress.

Our EMC enterprise app store authenticates you, it validates your device characteristics, shows you the applications you're entitled to, manages the installs and updates, logs all the relevant data, and so on.  We had to do a significant amount of building on top of the vendor-supplied platform to achieve all of this, though.

There's a lot to talk about here, so let's start with security -- that's a big issue with everyone.

As it should be.

Our approach starts with securing the device: for example, we can detect if someone has jailbroken their phone.  We've created an EMC container that provides local encryption services, as well as REST API access to EMC authentication mechanisms.

Ultimately, the application designer has to be responsible for how they'd like to handle security; we supply the services they'll need to accomplish that.

We've tried out the approach on applications that aren't all that demanding, like our conference room finder, as well as around applications that handle secure data, like our employee lookup database.  We’ve been very pleased with the results at both ends of the security spectrum.  Now we believe the right approach is to provide the services that application developers need.

You mentioned that EMC IT had created a mobile application container -- can you say more about that?

Well, we discovered that people were using groups of applications during the day, rather than just an isolated one once in a while.  By using a container-based approach, we can provide, for example, a persistent security identity across multiple applications in a container vs. forcing the user to re-authenticate themselves with each and every application.  The same sort of thing applies to data sharing between applications, like cut and paste.

We couldn't find anything we really liked on the marketplace, so we decided to build our own lightweight container which enables the user experience with key shared services.  It’s turning out to be a win for the application developers as well as our infrastructure and operations groups.  And it results in a notably better user experience across applications.

What about application development -- how are you approaching that?

Well, I think we all realize that mobile application development is somewhat different than traditional enterprise application development.  You're not simply miniaturizing a desktop application, you're thinking about finger-friendly applications that run natively on the mobile device.

And one thing we've learned is that a great user experience is absolutely paramount.  People's patience with mobile apps is incredibly short; they hit any sort of bump and they're off doing something else.

And of course, we have to always balance the user experience with risk, so we strive for as seamless a model for sensitive data access as possible.

Early on, we created a small design center around mobile technology competencies.  Initially, we had high hopes around HTML5, but we're not waiting for the standard to stabilize -- we have been going after more of a hybrid approach to native app development.

So the developers mostly work in a cross-mobile-platform JavaScript environment, and then augment with native iOS Xcode -- again, using the services we've provided in the mobile container. That coding environment is augmented by a design-time repository with the catalog of services we support, as well as a separate runtime repository for things like license entitlement.

Our Mobility Technical Competency Centre team supports anyone interested in building a mobile app -- be it a business user, or an IT application developer who needs to build a mobile version of some sort of application.  The goal is to slipstream mobile application competencies into our day-to-day application development work so it's just a normal part of how we do things going forward.

The implication is that you're embracing iOS and leaving Android and others for later?

We started off saying that we would support one platform really well, rather than spread our efforts across multiple user devices.  While not everyone is a 100% fan of that approach, it's what we have been doing, and it's worked out rather well.

However,  the cross-mobile-platform development environment I mentioned allows us to reuse the code base for Android as well. We are doing exactly that for a customer-facing service request app.

You didn't go for a MEAP -- a mobile enterprise application platform?

No, we really didn't find anything we liked.  For one thing, the marketplace is moving very fast, and we'd like to make as few big bets as possible, or at least to delay them as long as we need to.   We're a large enterprise with many hundreds of potential use cases, and we didn't want to limit our abilities by signing up for a finite and bounded approach.

More importantly, we've got our own application integration cloud that we're using for all enterprise applications going forward, and we really don't want to have some vendor's idea of an application architecture impacting what we're creating for ourselves.

So it seems that we've made a great start -- what do you think will really drive adoption?

Well, one big driver is our new SAP implementation -- we're using it as one of the cores of our business.  We expect that most EMC employees -- and, over time, partners and customers -- will be interacting with our SAP implementation at least at some level.

We've made the decision to think "mobile first", so we fully expect that most everyone will be interacting with SAP through a mobile device.

Now that the EMC IT team is well down the road, what advice would you offer for others?

The first point might seem obvious, but it needs to be stated anyway: you'll need to organize for success.  When we started, we didn't have the skills, the team and the organizational structure to make progress.  From a modest start, we've augmented and enhanced our organizational model as we've progressed, but it all centers around having the right model.

Second, we realized our goal was to enable mobility across EMC's entire ecosystem, and not just stand up a handful of mobile apps.  We want to think mobile first and foremost going forward, which means you tend to think in terms of sustainable and scalable platforms and processes vs. specific point technologies and isolated use cases. We also think in terms of the entire "stack" of app use cases, app user experience, security and risk, access, and supporting services.

Third, we realized we had to learn our way into this arena, and that means a healthy incubation period followed by branding your efforts as "beta" for quite a while.  We'll be running as beta for at least three quarters, maybe longer.

And, as I mentioned before, no matter how well you think you've got the user experience nailed, you can always do better.  That's turning out to be very important.

We also realize that the broader IT industry is also learning their way into this space.  There are very few off-the-shelf approaches we found that met our needs.  That's OK, those will come in time.  As I mentioned before, we're trying to delay any big technology bets in this space as long as we can.

Finally, we're very mindful of enlisting our users and our business partners as part of the journey.  We're very open and transparent about what our capabilities are, where the potential problems may be, what we can do now, and what we can do later.  While I'm sure there are some who'd like to have everything today, we're getting extremely positive feedback on where we are and where we'll be before too long.

For example, we use our internal social platform (EMC|One) to communicate, get feedback and improve the user support experience.  In particular, we're using our internal Innovation Conference to sponsor a contest to come up with a list of potential "killer apps" for mobile devices.  There are literally dozens of great ideas there to consider, and I'm sure we'll see more in the future.

If I had to offer a single big thought, it's that you're investing in a new way of delivering IT services that people want to consume.  It's not really just about saving money, it's about delivering new forms of value based on what the technology can now do.

Not only are we excited within IT -- it's a great project to be associated with -- but the business is seriously excited as well.  We have established great collaborations with the business groups and we’re all changing the traditional way of thinking about applications.

Great work, KK, and a great story.  My congratulations to the EMC IT team.

Thanks -- it's been a big effort on the part of a lot of people.  And we're not done yet -- not by any stretch of the imagination.  But we're confident that we've turned a corner and can plainly see where we'll be next quarter, and the quarters after that.

It's very exciting stuff, indeed.


By Chuck Hollis

Thursday, July 5, 2012

Welcome To The Information Age


In this blog, I get to selfishly write about what interests me.  That's what blogging (or at least *good* blogging) is supposed to be about.

You can easily tell some of my stronger interests: what's going on here at EMC, information technology and the IT industry, customer and partner interactions, business and economics, psychology and organizational theory, even the occasional venture into topics like careers and life skills.

I'm also an armchair student of history.  Not the names-and-dates kind of history; more of a general fascination with the broader patterns and models of how humans evolve as a collective enterprise.

And, if you'll permit me, let me share a brief history of us.

The Gathering Age

We evolved in a world where the food we needed was there for taking -- if we gathered it.  Plants and wildlife were plentiful -- if we knew where to look and put the effort in.  And, of course, we had to avoid becoming a meal for something bigger than us.

In this age, life was mostly about sustenance.  Little energy could be afforded to be spent on other pursuits.

To make life easier, we learned to form tribes of aligned individuals who banded together, divided labor, and protected each other.

We developed a rich culture, but didn't have the tools to readily preserve or share it.  Our collective information base was extremely limited: tribal knowledge, cave paintings and the like.

The Agrarian Age

We slowly learned that if we settled down in one place, we could grow our own food.  Sustenance, while still important, now did not consume our every waking hour.  Food could be stored from one season to the next; it could be traded for other goods.  There was increased incentive to specialize what we did.

Tribes became villages; elders were replaced by early government.  We built roads to move our goods to marketplaces and storehouses.  We invented currency to trade more freely.

And we learned to write.  We started codifying our thoughts, our knowledge, our experiences -- and sharing them with others.  Information started to flow.  Pen and parchment had their limitations, though.

But our collective information base started to grow.

The Industrial Age

We learned to extract raw materials, manufacture and distribute things at enormous scale: tools, buildings and other familiar artifacts.  Food became more plentiful, there was time and energy now to spend on other pursuits.

Transportation was needed to move goods from place to place -- this is the era of railroads, steamer ships, semi rigs and eventually air transport.  Finance became important: we invented banking, insurance and capital markets.  Towns and kingdoms became nations, blocs and eventually superpowers.

And we learned to print and distribute information widely.  Books, periodicals, libraries and mandatory education became the norm in many parts of the world.  The collective wisdom of the ages was thus largely available to anyone who invested the time to read.

Our collective information base began to grow very rapidly.

The Information Age

We learned to capture and distribute digital information.  We found that just about anything could be sampled, measured, analyzed and understood.  Raw data became information became insight and eventually more knowledge to add to the collective pool.

We built vast digital networks and data centers to move, process and store information at massive global scales.  Newer digital businesses that had no real analog in the previous era became commonplace.  Even money was no longer thought of as a physical thing; it had become a digital entity.

We learned that we could spend our lives completely bathed in information and content, if we chose.  There was very little of the human experience that couldn't be searched for, understood and shared with others.

Power clearly shifted from organizations and governments back to collections of aligned individuals.  The traditional divisions between nations and societies became less pronounced as we learned that we were all very much alike in our hopes and aspirations.

Our collective information base grew exponentially, but there was more.

We started to actively mine those huge repositories for new understandings and new insights about the world around us.  Understanding the data around a thing directly led to a deeper understanding of the thing itself.  Data science became a profession in its own right.

All at once, things we thought we understood well in the previous era were starting to be redefined before our eyes.  Education.  Healthcare.  Science.  Business.  Economics.  Government.

Commonplace social interactions have become very different: "I saw your poke and tweeted you back."  How would you have parsed that statement even ten short years ago?

Even parenting isn't what it once was.

Welcome To The Information Age

Past societal transitions took centuries; this one appears to have happened in a few short decades.  The term 'digital native' is a meaningful one; it signifies someone who's never known a world without powerful devices, pervasive networks and ready access to our collective information base.

Also a useful term: the 'digital divide' -- the worrisome gap between the digital haves and have-nots.  Not every part of the world moves forward at the same pace, unfortunately.

I was not born into a digital world (and am thus a digital immigrant), but like many around me, I learned to adapt quickly.  I realized what was happening, and made a decision to invest in the new tools and skills.  Many of my generational peers chose different paths.

One thing hasn't changed over time: we've always been defined by what we produce.  In the first era, the food we gathered and hunted.  In the second, the goods we grew or traded for.  In the third, what we made or did.

And, perhaps, in the fourth -- what we uniquely learn, create and share.

If you think about it, this blog (or most any other digital asset) likely wouldn't have existed even a short while ago.  It would have been too hard to create, too expensive to distribute, too difficult to consume -- and besides, there'd be little to gain from the effort.

Now I, like billions of others, can easily ingest vast quantities of information, analyze to my heart's content, write up what I believe to be important, and effortlessly share it with vast audiences around the world.  The friction historically associated with information has almost completely disappeared.

I continue to be amazed by the whole phenomenon.

My inputs are often the outputs of others.  My output is often someone else's input.  Their output then becomes someone else's input.  And so forth and so on.  It's the new economy -- the information economy.  It often moves at the speed of thought.  The comforting latency we once had to figure things out is largely gone.

Society has gone real-time -- that's so 3 minutes ago.

How Quickly We Adapt

I believe this speed of adaptation is a prerequisite for success in the information age -- not only for us as individuals, but for our collective businesses, organizations and governments.  The interwebs are chock full of stories where an individual or an organization didn't realize how the rules had changed around them, and suffered the consequences as a result.

We laugh -- until it happens to us.

Many years ago, I used to read all sorts of business books looking for ideas.  Not so much anymore.  The latency between ideation and implementation is now just too long; the idea often goes stale and is quickly replaced by a better one.  Besides, if an idea has legs, you'll pick it up on your web radar long before it goes into print.

Sometimes, we get asked for EMC's five year or ten year strategy.  While we certainly have some good ideas of how things will likely evolve, we also know that anything we write down will probably change before too long.

Ten years ago it was 2002.  At the time, I couldn't have made even a wild guess at 2012.  That's the world we live in.

When I get in front of IT audiences, I always make a strong case that -- in rapidly changing times -- agility matters more than anything else.  It's not meant as a simplistic buzzword -- I think it's essentially a strategy for survival.

Especially in the information age.


By Chuck Hollis

Wednesday, July 4, 2012

The Modern Boardroom is Empty

In the not-too-distant past, the boardroom was a place where executives met to plan the future of the company, analyze the competition, discuss satisfaction and retention, and generally come together to brainstorm how to accelerate success. On occasion, guests were invited to the boardroom – for example, top customers who required an executive briefing or an employee celebrating 25 years at a company.

Executives would spend hours, if not days, traveling to the boardroom to meet their peers face-to-face.  The boardroom would be filled with executives sitting down to hammer out the company’s top initiatives.

Today’s boardroom is empty.

Or at least not quite as full as it was a few years ago.

Could it be that executive teams are hosting fewer face-to-face meetings? Is the desire for personal interactions decreasing? Do people value meetings less now than they did 5, 10, or 15 years ago?

If anything, the opposite is true: people want to meet more often but are more geographically dispersed. Executives want the boardroom experience without physically being in the boardroom. Work/life balance (which I personally refer to simply as “harmony”) has made all of us – executives included – demand more time at home, which means less time in the boardroom.

Companies like Advocate Health Care and AXA Group have figured out how to bring the boardroom experience anywhere, at anytime.

Boardroom redesign leads to innovation at Advocate Health Care

Advocate Health Care has a unique boardroom which sits at the heart of its headquarters in Oak Brook, IL. The multi-purpose boardroom is a meeting spot for board members, a place of celebration for associates commemorating a 25-year (or more) anniversary (and there are MANY of them, which demonstrates what a great place Advocate is to work -- but that’s a topic for another blog), and even the set for this video testimonial about innovation in the Advocate boardroom and beyond:

The Advocate team describes how adding Cisco TelePresence to the boardroom helped:

-          Executives and associates collaborate more effectively and get face-to-face more quickly

-          Reduce travel time and improve productivity to better serve patients. For instance, executives who previously spent a week traveling to individual sites can now meet with each site from the boardroom, saving days of travel.

-          Include more key people in critical meetings, as evidenced by Dr. Rishi Sikka, Vice President of Clinical Transformation: “At the end of a long day, no physician wants to drive an hour each way for an in-person meeting. But if they can walk into a TelePresence room in their own hospital right after surgery…they appreciate that. We see more physicians in virtual meetings than we ever saw in our in-person meetings.”

-          Save money. “If you get a group of highly-paid physicians into a central location without requiring them to travel, that translates into a considerable cost savings for the organization,” says Dr. Lee Sacks, Executive Vice President and Chief Medical Officer at Advocate.

Dr. Sacks also commented that prior to the TelePresence deployment, a meeting with 20 surgeons would have been difficult, if not impossible. With Cisco TelePresence, not only is such a meeting possible, it is extremely effective and provides considerable cost savings to the organization.

The results are eloquently summed up in this statement from Dr. Sacks, “We held a single TelePresence meeting for 20 spine surgeons to discuss implantable devices. I’m certain that we paid for our entire TelePresence investment with that one meeting.”

And while fewer people are using the Advocate boardroom, productivity has increased, executive and employee satisfaction is higher, and everyone is spending more time focused on their priorities.

AXA Eliminates 20,000 Executive Business Trips

When AXA Group looked for a dynamic solution to leverage the collective knowledge of 214,000 employees worldwide to improve business, they understood the solution would need to be simple and scalable; like Advocate, they chose Cisco TelePresence.

At the beginning of the project, the AXA Tech department identified a few key meetings on which to test the new Cisco TelePresence system, including the annual board meeting. Board members travel from all around the world to Paris for a 2-hour meeting, costing the company a lot of time and money.

The pilot board meeting was a huge success (to the relief of AXA Tech!). Greg Medwin, Manager Global Network Design Authority at AXA Tech, told us, “Some of the initial responses to our AXA presence [Cisco TelePresence] systems was initially ‘wow, how can we adapt this into our business? The pure immersive and interactive response, as well as the ability to reach out and communicate with teams globally or on the other side of town is [powerful].’ It was seen as a great benefit by the executives, something immediately accessible and not expensive.”

The success with Cisco TelePresence extends beyond the boardroom:

-          Cut travel costs by €100 million over a three-year period by hosting 43,000 meetings

-          Reduced CO2 emissions by 240,000 metric tons over same timeframe

-          Reduced the number of executive trips by 20,000, allowing executives to make better use of their time

-          Eliminated 15,000 employee business trips

What is your company doing to bring the boardroom experience outside the boardroom? Share your story!

By Jill Shaul

Tuesday, July 3, 2012

Combating Malware and Advanced Persistent Threats

In the past decade, the security industry has seen a constant rise in the volume of malware and the attacks associated with it. Malware is constantly evolving to become more complex and sophisticated. For example,
  •     Unique malware samples broke the 75 million mark in 2011 – Network World
  •     500 malware networks available to launch attacks – InformationWeek
  •     Malware authors expand use of domain generation algorithms – Computerworld
  •     Zeus/Spyeye variant uses peer to peer network model  -  Infosecurity.com
  •     Anonymous promises regularly scheduled Friday attacks – Wired
This blog discusses the changing malware threat landscape, challenges faced by intrusion-prevention systems, and limitations with traditional signature-based detection. We also provide the vision of McAfee Labs regarding effective solutions to combat such advanced threats.

Changes to the Threat Landscape

In the last decade we have seen exponential growth in the number of Internet users worldwide. This expanding base provides a lucrative opportunity for criminal organizations to carry out illicit activities. Compared with earlier malware that primarily created nuisance attacks, today’s malware is much more focused on both its victims and its goals. Today’s attacks are a major concern for enterprises and organizations. Not only do they risk the loss of intellectual property or data, but any disruption to business continuity can also severely hamper an organization’s productivity and reputation. Protecting networks with a wide variety of Internet-connected devices—desktops, laptops, smart phones, etc.—has become even more of a challenge.

Botnets are the most common form of malware used by cybercriminals to attack enterprises and government organizations worldwide. Botnets, networks of compromised “robot” machines (also known as zombies) under the control of a single botmaster, carry out malicious activities such as launching distributed denial of service (DDoS) attacks on servers, stealing confidential information, installing malicious code, and sending spam email. Recent examples are Operation Aurora, ShadyRAT, and the DDoS attacks on payment websites in support of WikiLeaks.

Advanced persistent threats, on the other hand, focus on specific targets, such as government organizations, with motives ranging from espionage to disrupting a nation’s core networks, including nuclear, power, and financial infrastructure. Due to the discreet nature of these attacks, they can remain undetected for a long time. Such attacks are also much more complex and sophisticated than other malware. For example, Stuxnet targeted Iranian nuclear facilities, and Flame carried out cyberespionage in Middle Eastern countries.

Challenges

Given the value of intellectual property and national secrets, as well as the vast potential monetary rewards of these advanced attacks and threats, more and more cybercriminals—often well funded by criminal organizations—are drawn to developing malware. Malware authors implement various techniques to make their malware and its communication channels stealthier, avoiding detection by security products on host systems and on the network. Encrypting communications between host and control server, using decentralized network architectures to stay undetected and resilient, using domain and IP flux techniques to hide control servers, and obfuscating malicious payloads are some of the techniques widely used by malware today.
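To make domain flux more concrete: it typically relies on a domain generation algorithm (DGA), in which the bot and its controller independently derive the same pseudo-random list of candidate rendezvous domains from a shared seed and the current date, so blocking or sinkholing any single domain accomplishes little. The following is a minimal, hypothetical sketch in Python; the seed, hostname scheme, and `.example.com` suffix are illustrative inventions, not taken from any real malware family.

```python
import hashlib
from datetime import date

def generate_domains(seed: str, day: date, count: int = 5):
    """Derive a deterministic list of candidate rendezvous domains
    from a shared seed and the current date. Both the bot and the
    botmaster run the same code, so they agree on today's list
    without any prior communication."""
    domains = []
    for i in range(count):
        material = f"{seed}-{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        # Use the first 12 hex characters as a throwaway hostname.
        domains.append(digest[:12] + ".example.com")
    return domains

# The same inputs always yield the same domain list, on both ends.
today = date(2012, 7, 3)
assert generate_domains("campaign-42", today) == generate_domains("campaign-42", today)
```

Because the list changes every day, defenders must predict or reverse-engineer the algorithm itself rather than blacklist individual domains, which is exactly what makes the technique resilient.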

Traditional Detection and Its Limits

A signature-based detection mechanism that looks for unique network patterns has been the traditional method employed by security vendors to provide protection against attacks.

This method, though effective for defending against known threats, has limits.
  •     It is reactive: Researchers must monitor and analyze network traffic and reverse-engineer each attack before they can provide accurate detection coverage
  •     It is static: Malicious network patterns observed in previous attacks can change frequently, making existing signatures ineffective at detecting new variants of old threats
  •     It cannot react to unknown (such as zero-day) attacks
  •     The scope of detection is limited to a single network session and cannot correlate events across multiple network sessions
These limitations severely cripple traditional signature-based detection in protecting against emerging threats.
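The brittleness of static patterns is easy to demonstrate with a toy example. The matcher below is a deliberately simplified sketch, not how a real intrusion-prevention engine works, and the byte patterns are invented for illustration: it catches traffic containing a known malicious pattern, but a one-byte mutation of the payload, the kind of variation malware authors routinely automate, slips past it.

```python
# A toy signature database: byte patterns observed in past attacks.
SIGNATURES = [b"EVIL-BOT-HELLO"]

def matches_signature(payload: bytes) -> bool:
    """Flag traffic containing any known malicious byte pattern."""
    return any(sig in payload for sig in SIGNATURES)

known_variant = b"GET /gate.php EVIL-BOT-HELLO v1.0"
new_variant   = b"GET /gate.php EVIL-B0T-HELLO v1.1"  # one byte changed: O -> 0

assert matches_signature(known_variant)    # known threat: detected
assert not matches_signature(new_variant)  # trivial variant: missed
```

The new variant behaves identically on the wire, yet the signature no longer fires; this is the "static" limitation above in miniature.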

McAfee Labs

To win the battle and keep customers protected against emerging threats in the future, security vendors must continue to innovate.

Based on the current challenges to and limitations of signature-based detection, McAfee Labs envisions a dynamic solution that can provide proactive protection against future threats.

Such a solution must:
  •     Provide a behavioral-based detection framework in addition to the traditional approach
  •     Be capable of integrating various behaviors of the malware/threat lifecycle
  •     Have the ability to correlate attacks across multiple network sessions to precisely detect a specific type of threat
  •     Have the ability to do event-based correlation across multiple network sessions to detect unknown malware/threats
Such a framework will be targeted primarily at providing not only detection of known threats but also early warnings to customers of possible infections.
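As a rough illustration of the cross-session correlation described above (a hypothetical sketch of the general idea, not McAfee's actual design; the event names, weights, and threshold are all invented), a behavioral engine might score each host on independent low-confidence events observed in different sessions, such as a DGA-like DNS lookup, a periodic beacon, and a large outbound upload, and raise an alert only when the combined score crosses a threshold:

```python
from collections import defaultdict

# Weights for individual suspicious behaviors; none is conclusive alone.
EVENT_WEIGHTS = {
    "dga_like_dns_lookup": 2,
    "periodic_beacon": 3,
    "large_outbound_upload": 4,
}
ALERT_THRESHOLD = 7

def correlate(events):
    """Accumulate behavior scores per host across many network
    sessions and return the hosts whose combined score suggests
    an active infection."""
    scores = defaultdict(int)
    for host, event in events:
        scores[host] += EVENT_WEIGHTS.get(event, 0)
    return {host for host, score in scores.items() if score >= ALERT_THRESHOLD}

observed = [
    ("10.0.0.5", "dga_like_dns_lookup"),    # session 1
    ("10.0.0.5", "periodic_beacon"),        # session 2
    ("10.0.0.5", "large_outbound_upload"),  # session 3
    ("10.0.0.9", "periodic_beacon"),        # one isolated event: no alert
]
assert correlate(observed) == {"10.0.0.5"}
```

No single session here would trip a signature, but the combination across sessions is a strong indicator of infection, which is the kind of early warning a behavioral framework aims to deliver.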

In subsequent blogs, we will talk more about the solution that McAfee Labs believes will be capable of combating malware and advanced persistent threats on our networks.

I would like to thank my colleagues Chong Xu and Ravi Balupari for their contributions to this blog.

By Swapnil Pathak