Thursday, June 30, 2011

The Great Transformation

Each and every day, I get asked "what is this cloud thing all about?".

I get asked by customers, partners, analysts, EMC employees, and so on.  Although I've been asked this question for a few years now, my answer keeps getting better.
I think the answer is important.

Too small or limited an answer, and people won't get the enormity of what's going on in our industry.  Too big an answer, and people won't be able to wrap their heads around it.

So let me share with you my current best take on "what is this cloud thing all about?".

A Thought Experiment
Think of our traditional and familiar IT industry as a giant and very complicated Lego model.
You've got part of the model that has all the vendors: big ones, small ones, new ones, old ones.
You've got how all their technology goes to market: direct sales force, resellers, integrators, retail, and so on.

You've got the IT organizations within enterprises: their roles, their processes, their positioning, their value proposition.

You've got the people within IT: their skills, roles and value propositions.

You've got the people in organizations who actually use IT to create value: their expectations, their needs.

All wrapped up in one, glorious, complicated Lego model.  A thing of beauty to behold -- for a moment.

Now Throw It On The Floor
That's right.  Pick it up, and throw it on the floor with force.  Bang! 

The individual pieces don't break, but they're now scattered everywhere.

You can still see the componentry and objects that used to comprise the model, but they've lost their association with each other.

There isn't any less IT stuff, it's just rather disorganized right now.

And now we get to repackage and re-assemble everything to build a new (and potentially far better) model out of essentially the same pieces.

Putting It All Back Together
Infrastructure technology gets re-assembled around virtualization, convergence, IaaS and delivering services.  Certain new technologies are needed (e.g. orchestration, federation), but -- at the end of the day, you've got your familiar server, storage, network, etc. -- just in a different arrangement.

Application technology gets the same treatment -- we still have data management, middleware, application logic, user interfaces, and so on.  Ditto for security, GRC, operations, analytics, etc.
You can still see all the various pieces; they've just changed a bit in how they're constructed and interact with other components.

Users still are using devices to get to their information and workflows -- they're just different devices in a different context.

Nothing really new here, if you think about it.

If you look at it a while, you can see the patterns for how the new pieces will come together.  Sure, there will be debates about the "best" way to build the new model, but -- usually -- we're arguing at the periphery.  The design patterns are turning out to be roughly the same in the new model.

Keep in mind, though, that we're just considering the technology bits.  Indeed, that's what technology people tend to do: discuss and evaluate different technologies, and how they might be assembled together.

My argument is that there's far more to re-constructing our Lego-based IT model around "cloud" than just shiny new technologies.  Important, yes -- but hardly complete.

The Vendors Are Shifting
Any time there's a big disruption in the IT vendor industry, the pecking order changes.  Big vendors have less power and influence, smaller vendors get an opportunity to improve their standing and relevance.

And that's exactly what's happening here.

Personally, as I look at EMC's extended portfolio, I see technologies that are far more interesting and relevant in the new world than they might have been in the old one.  And as you look at different industry players through the new lens vs. the old one, your assessment of their importance changes dramatically.

The Value Chain Is Shifting
Cloud is creating entirely new IT consumption models -- enter the era of the IT service provider.
Instead of enterprises buying and running IT stuff, they can consume the end product of IT as a service: infrastructure, applications, user experience, IT management functions -- you name it, and there's an increasingly attractive way to consume it "as a service".

And the level of investment by SPs in making these new IT services available can be quite staggering to comprehend.

Many of these options weren't really available before, or -- in many cases -- they weren't as attractive as they are today.  The forecast is simple: there will be many more external and internal consumption options, and they'll be increasingly more attractive over time.

Back to our Lego model: you'll still see the same pieces, they'll just be arranged in more attractive ways for you to consume.

The Role Of IT Is Shifting
When IT starts to think of itself as the "internal service provider" for the organization, interesting things start to happen.  First, if IT can't do better than an external service for something, they'll use the external service -- all things being equal.

This means that -- over time -- IT tends to focus on things that make them unique and valuable to the organization vs. simply struggling to re-invent the wheel that's available off-the-shelf through alternative means.  Typically, their value-add shifts to something more important: they understand the business, and what people are trying to get done.

They get more consultative as a result.  They bias for speed and agility vs. building pyramids.  They bring the very best of IT thinking to the big table and position it as a set of capabilities that can increase differentiation, effectiveness and ultimately competitiveness.
It's a beautiful thing when you see it :)

The Role Of The IT Professional Is Shifting As Well
Anyone who's been an IT professional for any length of time will realize that you're only as good as your skills.  Learn new skills that are valuable to the organization (and we're not just talking technology disciplines here!) and you'll do well indeed.

As we re-assemble our industry model of IT around cloud concepts, the "connectors" -- the people who bring the pieces together and make it all happen -- are realizing that, yes, they're still very important, but that they'll be creating value in different ways than before.

Stuff that doesn't matter a lot will be either automated, outsourced or consumed as an external service.  Stuff that *does* matter will command a premium in the market.
Exciting -- if challenging -- times indeed.

The Emergence of the IT-Literate Business Professional
There's a dangerous and unproductive mindset you'll sometimes see -- that it's only IT people who really "get" IT.  While that might be true in some situations, it's becoming less true every day.
We have an entire emerging generation of workers and leaders who were "born digital" -- they've been around information technology their entire working lives.

They know what it can do.  They know what it can't do.  And they know what they need and want from their IT organizations.  And, of course, like all smart business people, if they can't get what they want internally, they'll go outside to see if someone can help them.

Simply assuming that non-IT people don't "get it" is asking for trouble.  Yes, I've heard lots of stories of business people doing really dumb things when it comes to IT.  I've also heard just as many stories of IT people doing really dumb things when it comes to IT.

Add your expertise to theirs, and wonderful things can happen.

The New Shape Of IT?
Information -- and information technology -- is becoming ever-more important with every passing day.

There appears to be no shortage of demand from people who want to consume technology and information in ever-more-clever and productive ways.  That's not going to change for the foreseeable future.

What is changing fast is the model for how IT is done -- how those pieces are assembled, operationalized and consumed.

And that is what I think this "cloud" thing is all about.

By: Chuck Hollis

Wednesday, June 29, 2011

Why Applications Are Like Fish And Data Is Like Wine

This is not an original title, nor even an original thought.

It was unabashedly lifted wholesale from James Governor's Monkchips website.  And this is not the first time I've done this.

The most recent time, a quick interaction resulted in a philosophical question that still rattles around in my head today: should information be on the balance sheet?  Now that we have a better understanding of big data IT models, the answer is more decidedly yes.

But this post isn't about big data -- it's about IT philosophy in general, and how it's changing.

To Be Explicit
Fish is usually best served fresh and simple: from the boat to your plate with as few intervening steps and as little wall-clock time as possible.  Best of all is when you go to a top-shelf fish restaurant and there are many varieties to choose from to indulge individual palate preferences.

A steady diet of cod, for example, isn't something to look forward to -- even if it's fresh and no matter how many different sauces you put on it.


Trust me on this, I know.

And when it comes to wine, sure -- there are many interesting "young" wines -- but a well-made wine only grows in value over time.  If you're into wine collecting, you'll also appreciate the appeal of vertical flights that illuminate how wines (and winemakers) evolve over time.

Indeed, interesting wines can be much more valuable if they're part of a time series -- and kept a while.  Even if you're not 100% sure how it will turn out.

So, what do these seemingly random observations have to do with the more serious business of applications and data?

Not to oversimplify, but it's an important discussion.  Most interesting business models seem to be converging to "smart people using data through applications".

So a bit of philosophical meandering might be worthwhile :)

Old School Vs. New School Views On Applications
Big disclaimer: I do not currently live in the end-user application development world.  It's been *decades* since I've written anything meaningful that a user would see.  So the observations here are strictly indirect ones gathered from people who *are* in this world.
Here's what they tell me is starting to matter in their world:
  • Shortening the elapsed time between "there's a need" and "here it is, try it".  If they can take it from months to weeks to days to even hours in some cases, the business value is exponentially increased.  Fresher is better.
  • More modest apps focused on doing a few important things very well vs. lots of things very poorly ("there's an app for that"). No fish stew using leftovers.
  • Thinking in terms of iterating over a wide variety of focused use cases vs. more generic ones.  Variety is good.
Sounds like fish to me.  Freshly served from source to consumption -- end-user apps don't age well in these times.

Give people exactly what they want, when they want it, and don't slather it with a lot of unwanted extras.

Miss the window, and the fish isn't all that appealing anymore.

Old School vs. New School Views On Data
Many, many years ago, I was rather surprised to learn that re-processing historical data with new tools was a big deal in the energy exploration business.  I kind of naively assumed that -- once they'd extracted the value from a data set -- well, that was that.  Sure, they'd keep a copy around, but ...
My brother the geophysicist set me straight: there was a continual progression in new algorithms, more processing power, more recent surveys that complement historical work, new prices for oil, etc. -- all made for a compelling case to "never throw anything away".

Even if you couldn't see the obvious value today.

And, since then, I've often thought that could be the case in many, many situations.  The problem is -- it's hard to look forward and figure out what might be valuable at some future date in some future context.  I often get asked "how do you know what to keep and what to toss?"

I feel rather helpless here -- it depends.

Clearly, anything to do with transactions or events is an obvious choice to keep around.  But what about something ginormous like email with all the embedded docs we send each other?  I think of corporate email as a giant DVR recording the mental processes of a large corporation.
Somewhere in all that corporate jibber-jabber I would think there are potentially useful nuggets worth mining -- someday.  But making tradeoffs against finite resources and real-time requirements is difficult at best.

I started enjoying wine at a relatively early age.  Since then, I've kept a very modest wine cellar for quite a while.  All it takes is a bit of discipline to not drink your entire collection on some random night.

I'm hanging on to certain wines, but not quite sure when I'm going to be drinking them.  I do know that -- as a collection -- they're becoming more enjoyable and valuable -- up to a point, that is.
When I occasionally buy more wine, I face hard choices between what stays in the cellar and what goes up to the kitchen for immediate consumption.
Decisions, decisions :)

The Opposite Of Today's Reality?
In many IT shops I work with, the exact opposite tends to happen.  Valuable data gets routinely tossed as "stale", and lumbering applications are kept around way past their sell-by date.
I know, I know -- there's money involved in doing so effectively.  I get that part.

But, I'd argue, there's not much value in stale fish.

Or fresh grape juice, for that matter :)

By: Chuck Hollis

Tuesday, June 28, 2011

New Symantec Research: The Current State of Mobile Device Security

The mass adoption of both consumer and managed mobile devices in the enterprise has increased employee productivity, but has also exposed the enterprise to new security risks. Our latest research is a deep dive into the current state of mobile device security. You can read the whitepaper in its entirety here.

More than anything else, the analysis shows that while the most popular mobile platforms in use today were designed with security in mind—and certainly raise the bar compared to traditional PC-based computing platforms—they may still be insufficient for protecting the enterprise assets that regularly find their way onto these devices.

Today’s mobile devices also connect to an entire ecosystem of supporting cloud and desktop-based services. The typical smartphone synchronizes with at least one public cloud-based service that is outside enterprise control. At the same time, many users also directly synchronize mobile devices with home computers. In both scenarios, key enterprise assets may be stored in any number of insecure locations outside the direct purview of the enterprise.

To get at the heart of this issue and start looking for a solution, the paper takes an in-depth look at the security models employed by two of today’s most popular mobile platforms: Apple’s iOS and Google’s Android. The goal is to better understand the impact these devices have as their adoption grows within the enterprise and share that knowledge. It defines the major mobile threats we’re seeing today—click here (PDF) to see our infographic—and analyzes the effectiveness of each platform’s in-built security features against these threats.

So, what did we find? Overall, our analysis showed that while not perfect, the iOS security model is well designed and has thus far proven largely resistant to most types of attacks. With regard to Android, while we believe its security model is a major improvement over the models used by traditional desktop and server-based operating systems, it’s not perfect either. Specifically, it suffers from two major drawbacks. First, its provenance system enables attackers to anonymously create and distribute malware. Second, its permission system, while extremely powerful, ultimately relies upon the user to make important security decisions. Unfortunately, many users are not technically capable of making such decisions and this has already led to social engineering attacks.

It’s important for enterprises to remember that today’s iOS and Android devices do not operate in a vacuum—they’re almost always connected to one or more cloud-based services or to a home or work PC, or all of the above. With that said, when properly deployed, both Android and iOS platforms allow users to simultaneously synchronize their devices with both private and enterprise cloud services without risking data exposure. However, these services may be easily abused by employees, resulting in the exposure of enterprise data on both unsanctioned employee devices as well as in the private cloud.

Thus, it is imperative that enterprises seek to understand the entire ecosystem the devices used by their employees participate in, and then formulate effective device security strategies to mitigate the risk these devices create. This can seem like a monumental task, but reading our whitepaper is a great place to start!

By: Carey Nachenberg

Monday, June 27, 2011

Mac Users, Upgrading to OS X 10.6.8? You Want to Read This First...

We are once again writing to follow up on our earlier post related to a similar issue from January.  This time, it’s for Mac OS X upgrades to Apple’s just-released 10.6.8 update and PGP Whole Disk Encryption for Macs.

As with the issue described in the previous post, Apple’s automated Mac OS X 10.6.8 Software Update mechanism bypasses the protections PGP Corporation had put around a critical file needed for normal system startup.  This time, users running 10.1.1-Build 10 and newer had no problems with the Apple 10.6.8 update, as expected.  Users running older versions, however, ran into problems.

As communicated previously, the PGP Engineering team discovered that the Apple automated Software Update mechanism bypassed the protections PGP built in to protect the boot.efi file.  This bypass allows the Mac OS X update to overwrite a critical file needed by PGP Whole Disk Encryption when the machine boots, thus rendering the system non-bootable after installation of the update.

Users of PGP Desktop 10.1.1-Build 18 (or higher) did not run into any issues because PGP was able to properly protect the boot.efi file.  Users running versions older than PGP Desktop 10.1.1-Build 10 ran into problems because the new mechanism for protecting the boot.efi file does not exist in those versions. While Build 10 is not affected by some of the Mac update issues, it wasn't until Build 18 that we also fixed the issue with a combo updater for Mac.

We recommend that you upgrade to PGP Desktop 10.1.1-Build 18 or higher prior to upgrading Mac OS X to 10.6.8.  This will prevent boot issues from this OS X upgrade.

Latest Knowledge Base article pertaining to upgrading to OS X 10.6.8: http://www.symantec.com/business/support/index?page=content&id=TECH163224

For more information on how to obtain the latest version of PGP Desktop, please visit: http://www.symantec.com/business/support/resources/sites/BUSINESS/content/live/TECHNICAL_SOLUTION/163000/TECH163224/en_US/HOW%20TO%20Request%20PGP%20Desktop%20Service%20Packs.pdf

By: Kelvin Kwan

Friday, June 24, 2011

News Links That 419 Scammers Have Used The Most

When scammers try to gain sympathy from email readers or to entice them with huge amounts of money, they will usually mention a tragedy or any event that attracted huge public attention. They may also want users to read additional information, so a URL from a well-known news site is provided. This added link may assure a reader that the email is genuine and that some action needs to be taken in response. Toward the end of the scam email, if the event is a tragedy, an appeal to help the victims is made. The message will also provide contact information in the form of email addresses and phone or fax numbers.

Anti-spam filters find it easier to block the news URLs in these scam messages because, although the links are legitimate, they point to old news items that should ideally no longer be in circulation for any reason.

Out of curiosity, we went through our active filters to check such news URLs and were surprised to find that some filters created as early as 2009 are still catching spam. The spam caught for each URL is in the range of 3 million to 9 million messages. For Symantec customers, these are the most abused tragedies or events used inside "419" scam messages, with URLs from legitimate news sites inserted in the mail body as indirect "proof" of the story.

The most abused news links are listed in descending order of spam caught:
  1. Foreign currency worth $200 million found in Baghdad, Iraq in 2003
  2. Indian Ocean earthquake in 2004
  3. News on Flight 111 crash in 1998
  4. News on airline crash in 2003
The news link on foreign currency worth $200 million found in Baghdad was used the most, so it appears this was the most convincing story for persuading users. The image below shows an example of the email scam.


Figure 1: Iraq war booty email scam

Be it a tragic event or a find like the one in Baghdad, scammers will try to make the most of it. They will try to convince users to contact them and may extract money from the recipients. Therefore, email users need to be careful when contributing to a charity organization. Type the organization's website name directly into the Web browser rather than clicking URLs in the message. Also, when entering personal or financial details, ensure the website is encrypted with SSL by looking for the padlock, https, or green address bar. Most importantly, never use the contacts provided in scam emails - simply do not reply to scams.

By: Mayur Kulkarni

Thursday, June 23, 2011

Improving Passwords

Troy Hunt, a Microsoft MVP, has done some terrific analysis of the passwords people use. Unfortunately, what has made this possible is the recent trend in hacktivism whereby it is common for hacktivists to post the spoils of their attacks online to generate publicity and shame the company being attacked. While this has been bad news for the companies and their customers, it has provided a rich data set for researchers to analyze. The results from Troy’s research are pretty interesting. Rather than rehash the results here, I’ll let you read them yourself: www.troyhunt.com/2011/06/brief-sony-password-analysis.html

What struck me while reading the blog is how much we know about what kind of passwords people create and how little we’ve been able to make practical use of any of this knowledge. Sure we all run off and write blogs about how people need to make their passwords harder to crack. I don’t want to insult anyone’s blogging skills, but so far this hasn’t produced a lot of progress.

I think there is a way we can drive benefit, and better security, from this data. And the responsibility to do that falls back to those of us responsible for creating security solutions. Where it should be.
Here’s the situation: websites all seem to have rules about what characters to use for a password. They have rules about the length of the password. And they enforce those rules. I can’t create a password for the site if I don’t follow the rules. Although these sites ought to make sure these rules are aligned to best practices of length and character usage, this isn't always the case. But that’s not where I see the biggest opportunity. I'm sure they keep the password length low to help prevent forgotten passwords or to keep from just annoying users, so I'll save discussion of those practices for another day.

Here is an easy-to-implement way to force users to create better passwords: since the account creation program is already checking my password for the wrong number of characters and the right mix of numbers and letters, why can’t it also check for the passwords that hackers keep in their databases of common passwords?

Here is the list of the top 25 most used passwords from Troy’s research: seinfeld, password, winner, 123456, purple, sweeps, contest, princess, maggie, 9452, peanut, shadow, ginger, michael, buster, sunshine, tigger, cookie, george, summer, taylor, bosco, abc123, ashley, bailey.

I went to a couple of websites and set up new accounts. I created one account using purple (the fifth one in the list above) as a password. The site told me it was a weak password, but let me use it anyway. At another site, it would not allow purple, not because it was a common password, but because it was too short. So back I went to Troy Hunt’s blog. He listed a couple of passwords found in password dictionaries. They were “1qazZAQ!" and “dallascowboys.” I tried those. I was again told I was using weak passwords, but because they met length rules the site didn’t prevent me from using either one.

Here’s my proposal. These password dictionaries are not hard to get. Why don’t websites add these as a check and refuse to let their customers use common passwords? Sure, a few Dallas Cowboys fans might not be happy, but they have bigger problems with the team’s recent on-field performance.  Don’t think of it as annoying or limiting customers. Think about it as educating them. Oh yeah, and you’ll be protecting them, too.
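To make the proposal concrete, here is a minimal sketch in Python (purely illustrative, and not any particular site's code) of the kind of check an account creation page could add. The 25 passwords above stand in for what would really be a much larger cracking dictionary:

COMMON_PASSWORDS = {
    "seinfeld", "password", "winner", "123456", "purple", "sweeps", "contest",
    "princess", "maggie", "9452", "peanut", "shadow", "ginger", "michael",
    "buster", "sunshine", "tigger", "cookie", "george", "summer", "taylor",
    "bosco", "abc123", "ashley", "bailey",
}

def validate_password(candidate, min_length=8):
    # Return the reasons for rejecting the password; an empty list means it passes.
    problems = []
    if len(candidate) < min_length:
        problems.append("too short")
    if candidate.lower() in COMMON_PASSWORDS:
        problems.append("found in a common-password dictionary")
    return problems

print(validate_password("purple"))         # ['too short', 'found in a common-password dictionary']
print(validate_password("dallascowboys"))  # slips through only because this stand-in list is tiny

A real deployment would load hundreds of thousands of leaked passwords from disk, but the check itself stays about this simple.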

By: Kevin Haley

Wednesday, June 22, 2011

A Retrospective "TOuR" of Backdoor.Bifrose

Backdoor.Bifrose first came to our attention in 2004. It is a remote administration backdoor tool that allows unauthorized access to a compromised computer. Once installed, the malware has a range of capabilities, including running processes, opening windows, opening a remote shell, stealing system information (such as passwords and video game serial numbers), generating screen captures, and capturing video from a webcam, among other functionality. While Bifrose has been analyzed in the past, one of the more interesting features of the Trojan has been neglected or overlooked in most write-ups and analysis of the malware: its optional use of the Tor network. Tor, from the overview on their site:
“Is a network of virtual tunnels that allows people and groups to improve their privacy and security on the Internet. It also enables software developers to create new communication tools with built-in privacy features. Tor provides the foundation for a range of applications that allow organizations and individuals to share information over public networks without compromising their privacy.”
After a brief review of Backdoor.Bifrose, below, we’ll describe how the threat makes use of the Tor network using its “hidden services” functionality.

Backdoor.Bifrose is the detection for the Bifrost remote administration tool, a backdoor. Bifrost is a fully customizable application, complete with GUI and a basic manual. The configuration options allow a would-be attacker to specify methods of infection, a remote address to attempt to call home to, the Trojan’s installation directory, and some rootkit functionality. This makes common features difficult to predict as they are under the control of the attacker before infection takes place.
Once a computer has been infected, the malware launches Internet Explorer and injects itself into the program’s address space. The malware is now free to communicate with the configured remote command-and-control (C&C) server without being flagged by system firewalls. The compromised computer will then send some configuration and identification information to the remote attacker, including IP address, hostname, active user, and the version of the client.

When the connection to the C&C server is established, the compromised computer is then under the complete control of the remote attacker. Communication between a compromised computer and the associated C&C server is carried out using an encrypted connection. The Trojan’s C&C servers are typically hosted using a dynamic DNS hostname, listening on port 81. This predictable pattern of C&C hosts and a non-standard port made identification of a compromised computer quite trivial based on some cursory analysis of network traffic.
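As a rough illustration of what that cursory analysis might look like, here is a short Python sketch (using the scapy library and a hypothetical capture file name) that flags outbound connection attempts to port 81 in recorded traffic:

from scapy.all import rdpcap, IP, TCP

# Flag TCP SYN packets headed for the non-standard port 81 favored by Bifrost C&C servers.
for pkt in rdpcap("capture.pcap"):
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].dport == 81 and pkt[TCP].flags & 0x02:
        print("Possible Bifrose call-home: %s -> %s:81" % (pkt[IP].src, pkt[IP].dst))

Legitimate services occasionally use port 81 as well, so a hit here is a prompt for closer inspection rather than proof of infection.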

One of the interesting features of the Backdoor.Bifrose Trojan is its modular structure. During infection, the Trojan can install or download the file, “addon.dat.” This file is an encrypted plug-in for the Trojan that provides additional functionality to the attacker, but is not necessary for the basic operation of the Trojan. Thus, the functionality of the malware can be further extended depending on additional plug-ins.

Roughly two years after its initial release, an updated Bifrost was released with some additional, suspicious behaviour. A new plug-in for the Trojan had been developed that, again, could be optionally downloaded or packaged with the threat. The malware’s authors included additional functionality that allowed the malicious C&C protocol to be carried out using Tor routing.

Tor is more generally associated with maintaining client-side anonymity. For example, a user can prevent a remote site from knowing his or her IP address. However, a lesser known feature of Tor is that it enables server-side, or receiver, anonymity. This functionality is known as hidden services.
The use of the Tor network as a communication medium for Trojans is a novel idea and adds an extra layer of stealth and security to the Trojan. The communication method is the same as before, an injected thread in Internet Explorer, but now the Trojan can attempt to call back to a C&C server using Tor’s Hidden Service Protocol.

Figure 1: Bifrose Trojan calling back to its C&C server using Tor’s hidden service protocol

Tor’s hidden services allow users to offer Internet services while remaining anonymous. This is done using internal .onion hostnames. These domains are not actual top level domains (TLDs), but are internal domain names for the Tor network and are only routable from within the Tor network. A .onion domain name is generated on the computer that wishes to provide the hidden service, for example, a hidden Web server.
Figure 2: An example of Tor's hidden services.

A unique .onion hostname is generated for the hidden Web server (e.g.: 1aqqwrr3444abtsa.onion). Once the hidden server is connected to the Tor network, it can now be accessed through the .onion hostname and begin to accept connections and provide services anonymously. This type of behavior is very useful from a Trojan’s perspective as it provides a secure communication method while keeping the remote server anonymous. A further benefit for Trojans using Tor routing is the inherent encryption required to use the Tor network. This increases the difficulty of analyzing the communications between the compromised computer and the remote server.
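For a sense of how little client-side code is needed to reach a hidden service, here is a generic Python sketch (not the Trojan's actual plug-in; the .onion name is made up and the PySocks library is assumed) that hands a connection to the local Tor client's SOCKS proxy, which resolves the .onion address inside the Tor network:

import socks  # PySocks; assumes a local Tor client listening on its default SOCKS port 9050

s = socks.socksocket()
s.set_proxy(socks.SOCKS5, "127.0.0.1", 9050, rdns=True)  # rdns=True lets Tor resolve the .onion name
s.connect(("exampleonionhostname1234.onion", 80))         # hypothetical hidden-service hostname
s.sendall(b"GET / HTTP/1.0\r\n\r\n")
print(s.recv(4096))
s.close()

From the network's point of view, all that is visible is encrypted traffic to a Tor entry node, which is exactly why the technique appeals to malware authors.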

The Tor plug-in for Backdoor.Bifrose requires a .onion hostname to be hardcoded into the Trojan at build time. Once a computer is infected with the Tor version of the Trojan, an injected thread within Internet Explorer attempts to call the Tor-related functions of the plug-in. The Tor plug-in contains the following functions that allow the Bifrose Trojan to use Tor routing: torInit, torConnect, torRead, torWrite, torClose, torShutdown. The Trojan can now operate as normal, using the plugged-in Tor functionality to obfuscate communications and preserve the anonymity of the remote C&C server. This behavior also frustrates attempts to block the remote connections at the firewall level, as no fixed IP address or unusual remote port is used.

Since 2004, the popularity of Bifrost has dropped significantly, the remote administration tool is no longer actively developed by its original authors, and the Tor plug-in no longer works. While we have seen a number of private builds, identifiable by their version numbers, it seems the Trojan is reaching the end of its lifecycle. It remains one of the first examples of a Trojan actively using the Tor network to obfuscate its remote communications.

Symantec currently detects this family of Trojans as Backdoor.Bifrose. Symantec recommends that you keep your definitions and signatures up to date to ensure protection against threats mentioned in this blog.

Thanks to Gavin O’Gorman for his input on this blog.

By: Cathal Mullaney

Tuesday, June 21, 2011

Bitcoin Infostealer Falls Prey to W32.Induc.A

The case of the Bitcoin Infostealer is getting funny: we blogged about a business analysis on Bitcoin Mining, and we also blogged about malware designed to steal bitcoins from unsuspecting users (Infostealer.Coinbit).

Now we have found two more samples of Infostealer.Coinbit that are showing some evolution.
What is interesting about these new samples? 

First of all they seem to be from the same author as the previous sample that we blogged about - the binary executables are very similar in structure, and they also have the same strings:



Figure 1: Old vs. new – a comparison of strings dumped from different samples
The samples have the same (or slightly different) email account information, to which they will submit the stolen bitcoin ewallets.

Second, they show some familiar data:


Figure 2: Part of the infection code of W32.Induc.A

Do you recognize this piece of code? We have already seen it in W32.Induc.A! It is a worm that infects Delphi source code files (not executable binaries), so this means that the author of Infostealer.Coinbit was himself infected with W32.Induc.A. When he compiled the Delphi executable of the Infostealer, the Induc infection code was also included in it (note that the original sample that we blogged about was not infected by Induc).

Interestingly, we have found all these samples (infected and clean) through Virus Total, and all of them were submitted to Virus Total on the same day (June 15), which according to the bitcoin forum is the day the Infostealer began to spread.

One possible explanation could be that the author developed the Infostealer without knowing he was infected by Induc, then when he submitted it to Virus Total (to check for potential AV detections) he realized his computer was infected and cleaned it, leading to the final binary that was not infected by Induc and that was released in the wild. It may also be possible that the source code was in the possession of different people (some infected, some not). These are just theories of course; we don’t know what really happened.

Furthermore, the account passwords are left in the Infostealer executable in cleartext, ready for anyone to sniff them, and maybe this is why in one sample we can find a message from the author:



Figure 3: I think this roughly translates to “If you are looking for it, stop and go mine your bitcoins, or else I may get you the next time”

This message may be the result of the author previously having his account hacked and his data stolen. Despite the author’s threat, forum users on Bitcoin.org may have already tracked him down, as is suggested in this forum posting.

All these samples are detected by our latest definitions, so we advise our customers to keep their AV definitions up-to-date, and to take precautions when managing bitcoin data.
Thanks to Peter Coogan for his input on this blog. 

By: Andrea Lelli

Monday, June 20, 2011

The Last Horcrux Brings More Spam

Harry Potter and the Deathly Hallows - Part 2 is the last movie in the Harry Potter series and is being released globally on July 15. The movie is still a few weeks away from theaters, yet it has already become a hot topic for spammers. Symantec reported similar spam activity for Part 1 in the blog Harry Potter and The Deadly Hallows of Spam.

In the spam sample below related to the new release, spammers are offering free tickets to Part 2. The message says the offer is valid only in the U.S. and that there are limited supplies of the tickets. The email header shows an example of header spoofing, whereby the email purports to originate from the official Harry Potter site: the visible sender "Movie Tickets" actually resolves to "harrypottermovie@removed_address".


Figure 1. Harry Potter scam email
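To see how shallow this trick is, here is a small Python illustration (the sending address is made up) of why a friendly "From" name proves nothing: the display name and the underlying address are set independently, and most mail clients show only the former prominently.

from email.message import EmailMessage
from email.headerregistry import Address

msg = EmailMessage()
msg["From"] = Address(display_name="Movie Tickets", addr_spec="promo@scam-site.example")

sender = msg["From"].addresses[0]
print(sender.display_name)  # "Movie Tickets" (what the recipient sees first)
print(sender.addr_spec)     # "promo@scam-site.example" (nothing to do with the official site)

Checking the actual address, and being suspicious even of that, is the habit worth building.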

In the past, Symantec has observed spam promoting the Harry Potter novels and accessories at discounted rates, as well as 419 and online pharmacy scams invoking Harry Potter (see this blog, for example). The goal of these spam campaigns is to harvest personal and financial information.
Because Harry Potter fans are excited to find out what will happen in the final installment, we expect that spammers will continue to distribute more and more Harry Potter spam leading up to the final film's release since this is their last great chance to exploit the Harry Potter magic.

By: Samir Patil

Friday, June 17, 2011

Spammers Offering Fake Gifts for Father’s Day

This year, Father’s Day will be celebrated on June 19th. Of course, this is an occasion that is used to express feelings towards dads for all of their love and support, often accompanied by the giving of exclusive gifts. Sadly, spammers don’t forget to send out their fake offers to target this special day. Symantec is observing an increase in spam volume related to this event, which is shown in the graph below.




Father’s Day spam can be categorized into hit-and-run spam promoting fake products, e-cards, dating, and gift cards. The product promotions feature items such as cigars, replica watches, wallets, and computer accessories. Once a user clicks on a fake offer, they are directed to a webpage where they are asked to divulge confidential information such as a credit card number, CVV, email address, etc. Below are some examples of this type of spam email message with fake offers:



Here are some various “Subject” lines used in this latest spam campaign:

Subject: We Have What Dad REALLY Wants for Father's Day
Subject: Save 80% on the Perfect Father's Day Gift!
Subject: Affordable Father's Day gift that Dad will love!
Subject: Premium Cigars - Perfect for Father's Day!
Subject: Personalized Gifts for Father's Day
Subject: Father's day should send what gift? Holawatch bring your love to your father.
Subject: Don't Forget Dad this Fathers day
Subject: Help Dad Protect His Hard-Earned Money!


Spammers will always try to take advantage of unwitting users with fake product offers that require entering personal information before anything is delivered. Users should not click on any suspicious links received in unsolicited email messages and should always determine the legitimacy of the email and its offers. When it comes to ordering a product online, users need to pay close attention to the authenticity of the website. Websites that sell such products and ask for financial or personal information should be protected by SSL certificates and provide visible trust marks for verifying their authenticity. Keep the basics of online transactions in mind when buying the perfect gift for your dad. Happy Father’s Day!

Note: Thanks to Anand Muralidharan and Azam Raza for their contributions to this blog.

By: Samir Patil

Thursday, June 16, 2011

New Challenges In IT Finance?

Cloud -- in any of its forms -- is about changing the way you do IT.

No part of the IT organization appears to be able to fully escape the transformation to an IT service delivery model.  Indeed, in this blog, I've talked at length about many of these changes: key roles and skills, organizational models, even the relationship between IT and the business.
But one important area has yet to be discussed: IT finance -- the way that organizations fund their IT investment.

And there are some interesting philosophical challenges shaping up here.  Just to be clear, I don't have any easy answers this time :) 

In A Nutshell

At an oversimplified level, this is the classic "irresistible force meets immovable object" scenario.
The "irresistible force" is increasing demand for variable IT services.  Delivering IT as a service makes IT easier to consume; and, of course, anything that's easier to consume will cause much more of it to be consumed.

Indeed, there's plenty of anecdotal evidence that shows -- once friction is largely removed -- there's usually a bottomless demand from the business that involves more IT stuff.

This is not entirely a bad thing, if you think about it.  The business is using these newer efficiently-delivered services to do all sorts of valuable and useful things -- like make money.

All is well, until you fully contemplate the "immovable object".  Businesses, in general, like predictable costs.  That includes things like headcount, R&D, and -- of course -- overall IT expenses.
As seasoned managers know, budgets are set far in advance, and it's not pleasant to try and adjust them upwards mid-course.  More frequently, changing business conditions means that you end up getting your budget cut mid-flight.  When the CFO is looking for quick cost savings, two groups tend to get routinely hit first: marketing and IT.

That's how the world mostly works.  And it's not going to change just because we're all moving to a cloud model.

The Game Has Changed

In the familiar physical IT world, things were somewhat simpler to understand.  You, as a business user, had "your" applications which of course ran on "your" physical infrastructure.

Most cost allocations, as a result, were pretty straightforward.  The business was charged for the specific physical and software assets associated with "their" applications, as well as some shared allocation for facility, labor, etc.

Compare that with the new world of variable infrastructure consumption.  Topics like transparency and chargeback appear relatively straightforward compared to considering the inevitable scenario when demand exceeds supply.

Who gets priority -- and who doesn't -- from the new shared "power plant"?  Especially when there's not enough to go around?

Remember, in the old world, everyone had their own little puddle of dedicated-yet-inefficient infrastructure.  If you needed more performance or capacity, you took out your checkbook and bought a bigger/faster bucket of infrastructure.   If budget cuts were looming, "your" infrastructure was relatively safe, since the money was largely already spent.

In the new world, it's easy to see that the annual IT infrastructure funding model might not be able to keep up with demand for variable infrastructure to support different aspects of the business.

And There's More

As this announcement from VMware reflects, we're starting to see new pricing models for software that assume a variable consumption model for software licenses -- so it's not just infrastructure stuff that's in play going forward.

And, if you read the subtext carefully, you'll see the word "elasticity" starting to be applied to more aspects of software.  The emerging model here is different in a very important way that needs some serious consideration.

Today, most resource allocations for, say, a database task are rather static.  Administrators assign fixed amounts of virtualized memory, CPU and perhaps I/O.

In the new world of "elasticity", a busy database task can request more infrastructure resources dynamically -- and, ostensibly, release them when no longer needed.

Forget, just for a moment, the potential challenges associated with simple pedestrian IT concerns (such as capacity planning!) in this emerging world.  Instead, imagine a rather important business application that decides it needs more resources right now -- and imagine that involves de-prioritizing other tasks on the same infrastructure right now.

Like perhaps your personal VDI session, for example :)

Welcome To The New World Of Resource Prioritization

The advantages of building IT to deliver shared services from a dynamic shared pool are impossible to ignore.  We're not talking simple cost savings here; we're talking about agility and responsiveness -- something that every business leader craves.

The upshot will be that many IT leaders will be drawn into a relatively new and messy world of prioritizing business needs against available resources.  In one sense, this is nothing new for the IT team.

What will be new is the frantic pace that this dynamic prioritization is likely to demand.
Financial pressures will always incentivize an IT organization to run at ever-higher levels of efficiency against their shared infrastructure, which means that there will be less in reserve for the predictable "surprise".

But there are signs of hope ...

For one thing, the recent availability of compatible infrastructure from external service providers means that you can rent extra capacity -- quickly -- if needed.  That's a helpful development, if you think about it.

And, of course, as technologists we can imagine new variations of management tools that will help us understand and predict the various demands against the new shared infrastructure (think analytics), create categories and rules for who gets what resources under different sets of circumstances (think policy management) and report back to the business the actual service levels delivered.
One thing won't likely change, though.

Getting the business to pay for the IT services they need and want -- well, that's still going to be a challenge for the foreseeable future :)

By: Chuck Hollis

Wednesday, June 15, 2011

The Growing Appeal Of Compatible Infrastructure

The seductive logic of a hybrid cloud approach to infrastructure is becoming more apparent to IT organizations everywhere.

Build an internal private cloud for the pieces you want to keep inside.  And turn to external service providers for the bits that are done better by someone else.

And the advent of highly compatible infrastructure at both ends of that conceptual wire is turning out to make the hybrid proposition even *more* attractive.

Leveraging The Growing Ecosystem Of Compatible Service Providers
Let's assume, for the moment, that you're building a fully virtualized private cloud behind your firewall.  A Vblock-type approach has its merits, of course, but there's a bigger picture than just what you're doing inside the data center.

As more and more service providers are deploying Vblocks, an interesting option manifests itself -- the ability to rent the exact same infrastructure from a compatible service provider vs. using internal resources.

And that's turning out to be darn attractive in a way I hadn't fully understood until recently.

It's More Than Just The Technology Components

I'm sure a lot of people reading this will say "sure, but as long as both sides are running VMware, isn't that all you need?".  Well, yes and no.

If both enterprise customer and compatible service provider are using a Vblock, performance expectations are roughly similar.  An arbitrary virtual machine (two cores, 16GB of RAM, 100GB of disk) will perform the same on an "owned" Vblock vs. a "rented" portion of a Vblock.

Very convenient when doing sizing back and forth.  

You really can't make the same claim if that arbitrary virtual machine is moved to, say, a different server architecture and a different storage array and a different I/O subsystem.  The standardization inherent in a Vblock means that you'll get largely the same performance experience wherever the workload goes.

Consider second-level support for just a moment.  An "owned" Vblock is supported by VCE, which is in turn supported by the parent companies.  A "rented" Vblock is supported in the exact same manner.  Put differently, both ends of the wire are supported by the exact same support structure.

Another benefit, if you're trying to make your life more simple …

There's more.  For example, the ability to use GRC frameworks like Archer to provide the same controls on both sides of the wire.  Portals like Data Protection Advisor to ensure that all data is being protected, regardless of whether it's here or there.  UIM for infrastructure provisioning.  And so on.
Depending on the service provider and your particular choices, it's highly likely that the management and orchestration layers will be highly compatible with your internal choices as well.

And that's *before* we start moving workloads back and forth non-disruptively with VPLEX :)

Flexible Consumption Options Proving To Be A Boon For Customers

This "flexibility of consumption option" is proving to be extremely useful in a variety of pragmatic customer situations.

Here's one: customer wants to run a Vblock internally, but has to wait until the next budget cycle to pay for it.  No need to wait; get a head start on the environment by simply renting compatible assets from any one of a number of compatible VCE-based service providers.

When the budget for the "owned" Vblock eventually shows up, workloads can easily be moved back from the service provider with a minimum of hassle.

Here's another: customer sizing estimates for their new Vblock are varying all over the place, since future workload requirements are basically an educated guess.  The logical answer?  Simply buy a modest Vblock for the workloads that are well-understood, and use compatible service provider infrastructure for any potential overage.

No need to super-size your Vblock when there are plenty of external options.  Unless you want to, that is :)

Here's yet another: a company with global operations wants to shed the expense of operating multiple data centers around the world, yet is concerned that applications need to be close to the people who use them.  The approach?  Consolidate and centralize the workloads that are amenable to doing so; use rented Vblock infrastructure where a small footprint is needed to keep the application close to users.
Same infrastructure, same management, same expectations, same support, etc. on both sides of the wire.  How convenient!

Here's The Point
Infrastructure traditionalists often bristle a bit at the Vblock approach with its strict approach to design and configuration.  However, the structured approach is turning out to pay back even more benefits than we originally surmised.
  • Sizing, configuration, order and delivery takes less time
  • Getting the system into production takes far less time
  • Performance and related characteristics are well defined and well understood
  • Multi-level support comes from a single, integrated source.
  • Releases and patches for the entire infrastructure are pre-integrated and pre-tested
And now, as more service providers are standing up Vblocks, there's a strong interest in flexible consumption options: buy, rent, or any combination at any time.

Few, if any, of these benefits result from the more common "reference architecture" approach.  The Vblock is a product, built and supported by VCE.  Its standardization is its strength.

The Service Provider Angle

A similar story plays out for the service providers who are using Vblocks to build their businesses.  Not only do they get a leg up on operational costs vs. traditional approaches, but they also get "branded infrastructure" that's becoming well known to more and more sophisticated IT organizations.
Imagine you're a service provider, and a prospective customer asks you "what will my application be running on?".  If the answer is "a Vblock", it's likely to be a short and satisfactory conversation indeed.

Otherwise, you'll have some lengthy explaining ahead of you ...

Kicking The "Build It Yourself" Habit

There are more than a few IT organizations that are capable of building their own Vblock-ish approach.  While I don't argue their capabilities, I do argue the rationale for doing so: wouldn't it be better for the IT team to work on things that deliver unique value vs. re-inventing the wheel?
To that argument, I can now add another powerful one: insisting on building and maintaining infrastructure using a traditional model will severely limit your consumption options.  There won't be a ready supply of service providers waiting in the wings who are running infrastructure just like yours.
And that's turning out to be a very big deal indeed, especially in the upper levels of IT management.  Having the option to rent vs. buy is a tough one to give up.

The Power Of Standardization

Old habits die hard, and the practice of hand-crafting IT infrastructure will likely fall into this category.  We have an entire generation of IT professionals who've been taught how to select, integrate and support IT infrastructure components.

Vblocks (and their ilk) change this fundamental assumption.  Customers and service providers have seen the advantages of the integrated approach.  There's no turning back now.

Relatively quickly, the Vblock has become almost a unit of IT infrastructure currency: well understood, and generally accepted as proven and valid in most IT circles.

By: Chuck Hollis

Tuesday, June 14, 2011

When Are You Adding a CVO to Mahogany Row?

What’s a CVO?  A Chief Video Officer. A senior officer in your bank who is responsible for developing your video strategy, executing on that strategy and measuring and reporting its results.
Why would this even be a consideration? Consider this: by 2013, 92% of all Internet traffic will be video. Video is becoming more pervasive. It is not just YouTube and Netflix. It is now pervasive throughout financial services.


The uses of video stack up very quickly: most obvious is the growth in digital signage in retail branches. Now those advertising screens are morphing into interactive touch screens and full-blown video walls. There is video on almost all websites to promote products and services and to build financial literacy. Internally, both live streaming and on-demand video are used in training. Corporate executives can now broadcast live to all employees.

Virtual concierge services are appearing—this is where self-service branches use life-size, high-definition video to greet customers when they request help. Now, with video and collaboration technology, banks are staffing branches with remote experts and collaboration stations—with this capability, banks can have wealth managers, small business lenders and mortgage lenders available in every branch via video. Why Remote Experts? To deliver a high-touch experience, to deliver service and knowledge at the customer's convenience, and to make the best productive use of experts’ time.

And have you considered video surveillance? In the branches. At ATMs. In corporate offices. In contact centers?

Without a CVO you run the risk of reinventing the wheel and solving the same problems across multiple operations. A perfect opportunity for silo building. Video is multichannel. So, without a consistent strategy, you run the risk of delivering inconsistent experiences to your customers and employees. Speaking of risk, what is your compliance policy when video interactions need to be recorded? Is it the same as audio? What about storage? What search capabilities are available for unstructured content? Hmmm.

As you write the job description for your CVO, remember that it is not all about technology, but also about people and process. And looking good on camera, while a nice to have, is not a show stopper.

By: Leni Selvaggio

Monday, June 13, 2011

Puddles

I believe that we have reached a saturation point.  You know how, after heavy rain, the ground can’t absorb any more water and it begins to pool on the ground? We’ve reached that point with security incidents.
 
The bad guys just can’t pump out new malware any faster. Check out the Norton Cybercrime Index.  The trends for 2011 are pretty much flat. The explosive growth in malware we’ve seen in the previous 10 years is just not sustainable. Maybe new hacker tools will come along, new propagation methods, or more platforms, or more people to infect.  But for now, things are beginning to stagnate.  
 
This is not to say the problem is going away.  There were 286M new malware variants in 2010. 286 million! But even that mind-blowing number reflects a slowdown.  It’s more than the year before, but not the 100% increase we've reported in previous years.  It’s not like the growth we used to see.
 
So how to explain the nearly endless parade of security incidents we've seen in the last few weeks?  Well, in some ways, these are the puddles forming on the ground.  It’s not that the rain has gotten harder; it’s just that the ground has stopped absorbing it all.  Some of what we are seeing does reflect the bad guys attacking new platforms and finding new people to infect.  But it’s mainly puddles -- and the fact that many of these incidents show how much higher the stakes have become.
 
Before declaring a trend one way or the other, it's worth understanding the types of security incidents we’ve been reading about in the last few weeks.  While there have been a lot of incidents, they are not all the same.  What we’ve seen these past few weeks break down into three well-known categories: massive attacks, targeted attacks and hacktivism.
 
Massive attacks - Fake AV has been around for years. It remains the most popular type of massive attack.  At $49.95 per victim it’s a profitable business.   News coverage here does not reflect a major increase in these attacks; it reflects the novelty of these attacks now being directed at Macintosh computers. 
 
It’s called a “massive attack” because the bad guys are trying to infect as many people as possible.  They know only a small percentage will fall for their scam, so the best way to increase profit is to increase the number of computers targeted. In their search for new targets, eventually these crooks were going to start looking at the Mac. So the appearance of fake AV on Mac was inevitable.  If you were shocked when this happened you should prepare yourself.  These things will be showing up on mobile phones next.
 
Targeted attacks - Hardly a new occurrence.  But two events in 2010 started to increase the conversation about targeted attacks.  The first was Stuxnet.  The second was the phrase "advanced persistent threats."  I'm pretty ambivalent about the term APT.  The phrase has certainly captured people's imagination, and if it makes it easier to have a conversation about security, I'm all for it.  But the majority of the attacks being labeled APTs are frankly not very "advanced" and often not that "persistent."  "Targeted attacks" may be harder to turn into an acronym, but it's a better description.  Take the recent compromise of webmail accounts that was widely reported in the media.  It certainly wasn't an advanced type of attack; it was spear phishing.  There wasn't even malware involved. What it was, was targeted, and that's what got our attention.  That, and the fact that the affected company told us what happened.  Credit to Google.  With Hydraq in 2010, they seem to have started the trend of companies talking publicly about attacks targeted at them.  This has benefited us all.  It has built awareness about these types of threats and allowed security companies to have meaningful conversations with their customers about targeted attacks.  It's no longer a discussion about the theoretical.  The real risks of security incidents are now a lot clearer to businesses.
 
So the trend here is not an increase in targeted attacks, but an increase in companies willing to talk publicly about them.
 
Hacktivism - Crunch together the words hacking and activism and you get hacktivism.  My spell checker hates this word almost as much as I do.  But, until a better one comes along, it will have to do. The word was coined in 1994; the practice has been going on a lot longer than that.  A hacktivist's main forms of expression used to be defacing webpages, spamming and the occasional DDoS (distributed denial-of-service) attack.
 
The last major example of this was a DDoS attack targeting payment processors, online retailers and others.  It happened last December, in protest against sites that had stopped handling transactions for Wikileaks.  The DDoS attacks were generally considered ineffective, but I think they were a major success.  They may not have shut down any site for any significant period of time, but they generated an enormous amount of publicity.  And isn't that really the goal of hacktivism?
 
So, if there is any type of security incident seeing a significant rise, it would be hacktivism.   The group responsible for the December incidents has since moved on to another highly publicized attack, breaking into a security company and posting all of its email online.  Now a multinational gaming and entertainment company has felt the sting.  User passwords were stolen, but not for profit.  They were posted online to generate publicity.  And this has worked brilliantly.  It has worked so well that other hackers jumped in and launched their own attacks against the same company.  Those created more news, which encourages still more hackers to… It's a vicious cycle.
 
So, is the threat landscape worse than before?  Yes.  But we've been saying that for years; it's reached the point of being a cliché. What's new is that there is greater visibility into these threats.  The good news is that these events are finally getting the attention they deserve.  The bad news is that these incidents make clear the stakes are higher than they've ever been before.

By: Kevin Haley

Friday, June 10, 2011

Fake Donations Continue to Haunt Japan

A couple of months ago, Japan was hit by an earthquake of magnitude 9.0. The earthquake and the tsunamis that followed caused severe devastation across the country. Phishers soon responded with fake donation campaigns in the hope of luring end users. Unfortunately, it seems that phishers are continuing to use these fake donations as bait, based on a recent phishing attack we observed.

In a fake donation campaign, phishers spoof the websites of charitable organizations and banks and use those fake sites as bait. This time, they spoofed the German page of a popular payment gateway site with a bogus site that asked for user login credentials. The contents of the page (in German) translated to “Japan needs your help. Support the relief efforts for the earthquake victims. Please donate now.” The message was provided along with a map of Japan that highlighted two cities from the affected region. The first city shown was the one near Japan’s nuclear power plant, Fukushima,  and the second was the capital city, Tokyo. The map also showed the epicenter of the earthquake located undersea near the east coast of Japan.

Upon entering their credentials, users are redirected to the legitimate website, where they continue their activity unaware that they have provided their valuable login information to phishers. Because the login credentials in question are for a payment gateway site, the account is linked to users' money by means of credit cards or bank accounts. If users fall victim to the phishing site, phishers will have successfully stolen their personal information for financial gain. The phishing attack was carried out using a toolkit that utilized a single IP address, which resolved to four domain names and was hosted on servers based in France.

Internet users are advised to follow best practices to avoid phishing attacks:
•    Do not click on suspicious links in email messages.
•    Avoid providing any personal information when answering an email.
•    Never enter personal information in a pop-up page or screen.
•    Frequently update your security software, such as Norton Internet Security 2011, which protects you from online phishing.

By: Mathew Maniyara

Thursday, June 9, 2011

Spear Phishing in Google’s Pond

Francis deSouza - Group President, Enterprise Products and Services, Symantec

Earlier this week, Google posted a blog stating that the personal Gmail accounts of numerous users, including senior US government officials, Chinese political activists, officials in several Asian countries (predominantly South Korea), military personnel, and journalists had been attacked. Google said a campaign to obtain passwords appears to have originated in Jinan, China and was aimed at monitoring the contents of these users' emails, with the perpetrators apparently using stolen passwords to change people's forwarding and delegation settings. Google confirmed that it detected and disrupted this campaign and has notified victims and secured their accounts. They have also notified the relevant government authorities.

These attacks appear to be an example of “spear phishing.” Spear phishing is an email that appears to be from an individual or business the recipient knows, but isn’t. It comes from the same criminal hackers who want your credit card and bank account numbers, your passwords, and the financial information on your PC. At its heart, spear phishing is simply a targeted attack.

Symantec has noted a continuous increase in targeted attacks, including spear phishing. In fact, the April 2011 MessageLabs Intelligence Report, published by Symantec, revealed that the number of targeted attacks intercepted by Symantec.cloud each day rose to 85—the highest since March 2009, when the figure was 107 in the run-up to the G20 Summit held in London that year. While some high-profile targeted attacks in 2010 attempted to steal intellectual property or cause physical damage, many of these targeted attacks preyed on individuals for their personal information.

Spear-phishing attacks can target anyone. While the high-profile attacks that received the most media attention (such as Stuxnet and Hydraq) attempted to steal intellectual property or cause physical damage, many targeted attacks simply prey on individuals for their personal information. Such was the case with the recent events surrounding Google’s Gmail.

The spear phisher thrives on familiarity. They know their target’s name, email address, and at least a little about them personally. The salutation on the email message is likely to be personalized: “Hi Bob” instead of “Dear Sir.” It may make reference to a “mutual friend” or to a recent online purchase the target has made. Because the email seems to come from someone they know, targets may be less vigilant and hand over the information asked for. And when it’s a company they know asking for urgent action, they may be tempted to act before thinking.

How do people become targets of a spear phisher? The answer is simple: through the information they put on the Internet from their computers and smartphones. For example, spear phishers might scan social networking sites, find a user’s page, their email address, their friend list, a recent post telling friends about the cool new camera they just picked up from an online store, or a page about a presentation they are giving on a new groundbreaking technology. Using that information, a spear phisher could pose as a friend, send the target an email, and ask for the password to the user’s photo page. If the user responds with the password, the attacker will try that password and variations of it to access the user’s account on the online store where they bought the camera. If they find the right one, they’ll use it to run up a nice tab. Or the spear phisher might use the same information to pose as the online store and ask the user to reset their password or re-verify their credit card number. If the user complies, the spear phisher can then do them financial harm.

At the end of the day, these kinds of attacks are often highly targeted and prey on the susceptibility of individuals. Symantec recommends the following best practices for protection against targeted phishing attacks:

Do
•    Unsubscribe from legitimate mailings that you no longer want to receive. When signing up to receive mail, verify what additional items you are opting into at the same time. De-select items you do not want to receive.
•    Be selective about the websites where you register your email address.
•    Avoid publishing your email address on the Internet. Consider alternate options; for example, use a separate address when signing up for mailing lists, get multiple addresses for multiple purposes, or look into disposable address services.
•    Use strong passwords or two-factor authentication, such as Symantec’s VeriSign Identity Protection, which requires something you know and something you have.
•    Only enter personal and financial details on a website that is protected with an SSL certificate. Look out for the padlock, https, or the green address bar.
•    Using directions provided by your mail administrators, report missed spam if you have an option to do so.
•    Delete all spam.
•    Avoid clicking on suspicious links in email or IM messages because these may be links to spoofed websites. We suggest typing Web addresses directly into the browser rather than relying upon links within your messages.
•    Always be sure that your operating system is up to date with the latest updates, and employ a comprehensive security suite.

Do Not
•    Open unknown email attachments. These attachments could infect your computer.
•    Reply to spam. Typically the sender’s email address is forged, and replying may only result in more spam.
•    Fill out forms in messages that ask for personal or financial information or passwords. A reputable company is unlikely to ask for your personal details via email. When in doubt, contact the company in question via an independent, trusted mechanism, such as a verified telephone number or a known Internet address that you type into a new browser window (do not click or cut and paste from a link in the message). Only enter personal information when you initiate the session.
•    Buy products or services from spam messages.
•    Use the same login and password across multiple websites.
•    Open spam messages.
•    Forward any virus warnings that you receive through email. These are often hoaxes.

By: Francis deSouza

Wednesday, June 8, 2011

Droid Dreams, a Reoccurring Nightmare for Android Users

Android.Lightdd (the name is derived from the presence of the additional Trojanized package ending in the word ‘lightdd’) has been dubbed the follow-up to Android.Rootcager, AKA Droid Dreams, one of the first threats seen in the wild that attempted to use an exploit to root an Android device. Although the original reports on the discovery of the threat called out five accounts, Symantec has found additional publisher accounts under which apps were repackaged (at the current time, all of these accounts have been disabled).


The key point to note is that, even though the news of the return of ‘Droid Dreams’ has created a bit of a stir with high approximate download figures being quoted (because the threat was available through official channels), this threat, unlike its predecessor, does not carry out any system-level exploits and does not require the infected user to carry out any complex steps to restore the device to its pre-infection state.

This threat follows a very formulaic pattern. In addition to containing the malicious code base, which runs as a service called ‘CoreService’, the repackaged app also contains a configuration file, ‘prefar.dat’ (included in the assets folder of the apk file). Its contents, encrypted with DES, are three URLs, which the threat uses to establish the malicious host to contact. At this point in time, all three hosts are offline.
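
For the analytically inclined, here is a minimal sketch of how such a DES-encrypted configuration asset might be decoded once the key has been recovered from the code. The key, cipher mode and padding below are placeholders for illustration, not values taken from Android.Lightdd:

```python
# Minimal sketch (hypothetical key, mode and padding; NOT the threat's actual
# parameters) of decoding a DES-encrypted configuration asset like prefar.dat.
from Crypto.Cipher import DES            # pip install pycryptodome
from Crypto.Util.Padding import unpad

ASSUMED_KEY = b"8bytekey"                # DES keys are 8 bytes; value is made up
cipher = DES.new(ASSUMED_KEY, DES.MODE_ECB)   # cipher mode is an assumption

with open("prefar.dat", "rb") as f:      # file extracted from the APK's assets
    decrypted = cipher.decrypt(f.read())

try:
    decrypted = unpad(decrypted, DES.block_size)   # strip PKCS#7 padding, if used
except ValueError:
    pass                                           # padding scheme may differ

urls = [line for line in decrypted.decode("utf-8", "ignore").splitlines()
        if line.startswith("http")]
print(urls)    # expected: the three command-and-control URLs
```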


Once a connection to the host is made, the threat is capable of sending back device-specific information about the infected host. This includes the Model, Language, Country, IMEI, IMSI, OS Version, etc., as well as the list of package names installed on the infected device. The one unique field hard-coded into each package is the product ID, which we believe is being used to distinguish the package that was responsible for the infection.
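
As a purely hypothetical reconstruction (field names, values and wire format are illustrative only, not taken from the threat's actual protocol), the profile it assembles before phoning home might look something like this:

```python
# Hypothetical illustration of the kind of device profile the Trojan could
# assemble. Field names and format are assumptions, not Android.Lightdd's
# actual beacon structure.
import json

beacon = {
    "ProductID": "pkg-042",          # hard-coded per repackaged app
    "Model": "Nexus One",
    "Language": "en",
    "Country": "US",
    "IMEI": "000000000000000",
    "IMSI": "000000000000000",
    "OSVersion": "2.2.1",
    "Packages": ["com.example.game", "com.example.wallpaper"],
}
print(json.dumps(beacon))
```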



At its core, Android.Lightdd is a downloader Trojan, but with certain caveats. The threat is subject to the Android security model; therefore, any download attempt will not work as long as the user does not consent to the installation of the suggested app.

There are four possible ways an infected host could be prompted to download an additional payload. Links could be presented as “Market prompts”, “Web prompts”, “Update prompts” and “Download prompts”. Minor variances separate the different methods and, suffice it to say, the end results are the same.  The more concerning part is that Android Market links were being used as download repositories.

What still remains a bit of a mystery, and subject to speculation, is the bigger picture. All apps discovered so far contained more or less the same base code, i.e. they were all downloaders. But what is the point of that? Information harvesting, followed by the downloading of additional downloaders, doesn’t really add up. Or was the intent to download additional threats with more advanced features later on?

If you suspect that you might be infected by this threat (e.g. spotted a running service in the background called “CoreService”), download the latest update to Norton Mobile Security and do a full scan to protect yourself.

By: Irfan Asrar

Tuesday, June 7, 2011

How Safe is Your Password?

I received reports this week of emails that reference transactions of which the recipients have no knowledge. The  email includes a link for more detail, which then attempts to download a ZIP attachment. Nothing new here; most savvy users would know better than to open an attachment in an unsolicited email.

The interesting thing about this email, however, is that it includes a password previously used by the recipient. Seeing private data in an email like this would definitely raise suspicions that the sender has some kind of connection to the recipient or, worse, has compromised their account details. The ultimate goal for the sender is that the user’s curiosity would be piqued sufficiently to open the attachment, which would, of course, deliver the inevitable malware payload.

Symantec detects the file as Trojan.Zbot, also called Zeus, which is a Trojan horse that attempts to steal confidential information from the compromised computer. It may also download configuration files and updates from the Internet. It specifically targets system information, online credentials, and banking details, but can be customized through the toolkit to gather any sort of information.
So how did these scammers get the passwords? It seems fairly certain that a Web site database has been compromised. A number of sources on the Internet believe it was a major international social gaming Web site which is now most popular in Asia.

The text of the email is as follows:

Dear customer, [password redacted].

Your order has been accepted.

Your order reference is 61035.

Terms of delivery and the date can be found with the auto-generated msword file located at:

http://[domain redacted].com/Orders/Orders.zip?id:00835996Generation_mail=[email address redacted]

Best regards, ticket service.
Tel./Fax.: (224) 760 90 618

A number of different sites have been hacked over the past month with similar patterns in the malware link, although it is only this week that the social engineering element has been reported. The variants we saw were sent from disposable, free webmail accounts.

If you believe you may have been compromised, run an antivirus scan and change any important passwords. If in doubt, check your bank accounts for any suspicious transactions.
It is always a good idea to use different passwords for each site where you register details – for some handy tips on how to manage this, take a look at some password creation methods from our colleagues in Security Response.
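
One simple way to manage this (a sketch of the general idea, not the specific methods our colleagues describe) is to generate a strong, random password for every site you register with, rather than reusing a memorable one:

```python
# Sketch: generate a unique, random password for each site you register with.
# This illustrates the principle only; a dedicated password manager is the
# more practical way to store the results securely.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def new_password(length=16):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

for site in ["examplebank.com", "examplemail.com", "exampleshop.com"]:
    print(site, new_password())
```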

In addition to using unique passwords for each site, other best practices to reduce the risk from this type of attack are:

•    Never open or download a file from an unsolicited email
•    Keep your operating system updated
•    Use a reputable antivirus program
•    Enable two-factor authentication whenever available
•    Confirm the authenticity of a Web site prior to entering login credentials

By: Amanda Grady

Monday, June 6, 2011

I Don't Use AV Because I Have a Mac

It seems there is no letup in the recent spate of Mac malware. A few days ago, another group of domains was registered and is being used to support a fake antivirus campaign that targets not only Mac users, but also Windows users.

A series of sites were all registered by a Lee Juango, who gives an address in "Pekin"; however, the Web sites are hosted in Romania. The interesting thing is that these sites look almost exactly the same, with slight text changes depending on whether the target is a Mac or a PC.






On the Mac domains, you will get a file called "macprotector.zip" (MacProtector). On the page for Windows, you get a file named “install.exe” (detected as Trojan.Gen/Trojan.FakeAV!gen39). This is actually a copy of SystemTool.

Another thing to note about this campaign is that the people behind it are getting really lazy. The site gives the name of the Windows version of the fake antivirus product as Essential Cleaner, but when you install it, you can easily see that it is in fact a repackaged version of SystemTool. I don't know about you, but I'm thinking that at the very least they could have reskinned SystemTool so that it says "Essential Cleaner" after you install it.




There was some talk in the media and on blogs about the idea that the people behind Windows fake antivirus are also behind the recent spate of Mac-targeted fake antivirus. This suggests that these people may indeed be branching out. Now that they have made the move to the Mac world, they are unlikely to leave it anytime soon.

By: Hon Lau

Friday, June 3, 2011

What Makes Big Data Storage Different?

At EMC World, I was fortunate enough to facilitate our first-ever Big Data Storage Summit.  Imagine a room with 20 or so people, each facing their own unique flavor of stupendous storage requirements.
Our working premise going in was that big data storage requirements were fundamentally different from the more familiar enterprise requirements: not only the technology, but also the operational environment, the funding model and other contextual factors.
We were directionally correct, but we fortunately ended up getting surprised in several regards.  That's what these sessions are really all about -- opening yourself up to being repeatedly surprised.

Not All Big Data Is Analytics
Toss around the phrase "big data", and many people will immediately gravitate to the uber-data-warehouse-on-steroids mental picture.  That's fascinating enough in its own right, but there's another side to big data that is more about dealing with big files vs. big databases.

The analytics side was well explored during EMC World's first-ever Data Scientist Summit.  And the non-analytics side was the topic of the Big Data Storage Summit.  Think medical research, energy, video, repositories, satellite imagery, service providers -- anytime you're the proud owner of petabyte-class file systems coupled with alarming growth rates.

How Big Is Big?
Most people tend to focus on the absolute size of these environments.  While the total capacity numbers are certainly impressive, what's more interesting are the explosive growth rates, and that's where we started to focus.

When asked "how fast are you growing?" the responses ranged from "dozens of terabytes per month" to "dozens of terabytes per week".  A few were in the "terabytes per day" growth club.  Digging a little further, it wasn't hard to make the case that -- in some environments -- the growth rate itself was accelerating, leading to exponential growth on top of existing massive repositories.
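
To make the arithmetic concrete, here is a rough back-of-the-envelope projection using assumed numbers (not figures from the summit): start at 2 PB, ingest 30 TB a week, and let the weekly intake itself grow a couple of percent each week.

```python
# Back-of-the-envelope sketch of what "the growth rate itself is accelerating"
# means for capacity planning. All figures below are assumptions for
# illustration, not data gathered at the summit.
capacity_tb = 2000.0      # assumed starting capacity: 2 PB
weekly_intake_tb = 30.0   # assumed current ingest rate
acceleration = 1.02       # assumed: intake grows 2% week over week

for week in range(1, 105):            # project roughly two years out
    capacity_tb += weekly_intake_tb
    weekly_intake_tb *= acceleration
    if week % 52 == 0:
        print(f"year {week // 52}: ~{capacity_tb / 1000:.1f} PB, "
              f"ingesting {weekly_intake_tb:.0f} TB/week")
```

Even with these modest assumptions, the environment roughly doubles in the first year and the weekly intake nearly triples by the end of the second: exponential growth on top of an already massive base.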

Indeed, an interesting subset of the room created a picture of infinite storage demand; one where capacities were more dictated by limitations of resources and technology vs. simply keeping up with demand.  As the storage operational environments improved, they immediately tended to balloon to the next order-of-magnitude.
Yikes!


The Haves and The Have-Nots
We mixed the room up with two types of storage users: those that were meeting the challenge using purpose-built scale-out NAS (e.g. Isilon) and those that were attempting to use more traditional NAS platforms (e.g. EMC Celerra and VNX, NetApp, BlueArc, et al.).  We wanted to understand whether there was a meaningful and significant advantage to using purpose-built storage products vs. more traditional NAS offerings.

The differences couldn't have been more pronounced.  


Although it's considered exceedingly bad form to turn these research events into a blatant product pitch, at several points the Isilon customers were openly sharing how much better their worlds had become once they moved off of more traditional NAS products.

Gone was the endless treadmill of rebalancing storage and workloads across multiple filers.  Gone were the lengthy and repetitive installation, configuration and integration exercises.  Gone was the chore of detecting and responding to individual performance spikes.


This wasn't glossy marketing-speak; these were real live IT administrators who now couldn't imagine any other way to get things done.  The people using a purpose-built scale-out approach (e.g. Isilon) had other challenges they were facing, but they were of a different class entirely than those using traditional NAS filers.

Surprising to me was the discussion around downtime -- I had sort of assumed that downtime or performance degradation wasn't particularly a huge issue in these environments.  I was very wrong.
As part of the endless rebalancing that the more traditional NAS users faced, they often had to take frequent and lengthy downtime to shuffle hundreds of terabytes around.  Cranky and irritated users appeared to be the norm here, not to mention cranky and irritated IT administrators.

One customer shared how a relatively normal filer disk failure and subsequent lengthy rebuild put a smoking performance hole in the middle of a dozen-filer farm -- because the user data sets spanned multiple filers!  As a result, every user was significantly impacted; and of course the issue rose to very high levels indeed.

Yikes again.


Big Data Storage = Internal Storage Service Provider?
About an hour or so into the session, it became clear to me that we would end up focusing more on the people who were already using purpose-built scale-out NAS.  The folks who weren't were mostly so consumed by day-to-day firefighting that it was more difficult for them to articulate requirements beyond their current situation.

I then started to probe on the folks who were using purpose-built products.  We wanted to know more about their operational model (how they're organized to do what they do), and the associated funding models.

Before long, it was clear to me that their operational models had edged over to look very much like an internal storage service provider: here are my service offerings, here is how I make them very easy to consume, here is how I give you visibility into what you're using, and how well it's performing.

And -- behind that -- the processes, roles, skills and organizational alignment that are the hallmarks of IT-as-a-service vs. traditional enterprise IT silos.


Not everyone in this subgroup was 100% there, but it started looking awfully familiar to me.  And, as a result, their concerns started sounding familiar as well.

For example, they all were pretty good at provisioning storage services on demand.  That being said, there was recognition that they were really providing infrastructure resources, hence the need to associate server, network, image, etc. with the fundamental provisioning activity.  I'd describe it as a desired Vblock-ish model, but with entirely different compute-to-capacity ratios.

There was also a desire to give their power users more visibility into the resources they were using, and how well they were performing.  Most of that information flows to the storage administrator today, vs. a federated view where "subdomain administrators" can get their specific context.

Notions of chargeback and metering came up frequently as well.  Some of these larger environments were well-funded and thus weren't overly concerned with showing resource usage in a precise and granular fashion.  Others were coming from government-funded research or educational settings; for them, justifying each and every dollar spent was a pressing need.
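
The mechanics don't have to be elaborate.  A simple capacity-based chargeback sketch (group names, usage figures and the per-TB rate are all invented for illustration) might look like this:

```python
# Illustrative sketch of simple capacity-based chargeback: meter each group's
# usage and bill it at an assumed flat rate. All numbers are made up.
RATE_PER_TB_MONTH = 40.00   # assumed internal rate, dollars per TB per month

usage_tb = {                # assumed metered usage snapshot for the month
    "genomics-lab": 850.0,
    "imaging-core": 1200.0,
    "simulation": 310.0,
}

for group, tb in sorted(usage_tb.items()):
    print(f"{group:15s} {tb:8.1f} TB  ${tb * RATE_PER_TB_MONTH:,.2f}")
total = sum(usage_tb.values())
print(f"{'total':15s} {total:8.1f} TB  ${total * RATE_PER_TB_MONTH:,.2f}")
```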

Features and Functionality
We did some fishing to see if some of the more popular features found on traditional NAS platforms had an equally desirable role in purpose-built scale-out environments.  And there were more than a few surprises here as well.

For example, when it came to space reduction technologies (e.g. single-instancing, compression and data deduplication), there wasn't the overwhelming demand from the purpose-built NAS crowd that you might have expected.  I think they weren't exactly sure it would be worth the trouble in their environments, especially considering that their data types and usage models usually aren't great candidates for these technologies.
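
A toy example helps explain the hesitation: deduplication only pays off when chunks repeat, and genuinely unique scientific or media data mostly doesn't.  Here is a minimal fixed-size-chunk sketch (128 KB chunks and SHA-256 fingerprints are assumptions for illustration, not any product's algorithm):

```python
# Toy illustration of why dedup yields little on unique data: chunk a file,
# hash the chunks, and see how many chunks repeat.
import hashlib

def dedup_ratio(path, chunk_size=128 * 1024):
    seen, total = set(), 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            total += 1
            seen.add(hashlib.sha256(chunk).hexdigest())
    return total / len(seen) if seen else 1.0

# e.g. dedup_ratio("/data/scene_0001.raw") -- hypothetical path -- on genuinely
# unique data usually comes out very close to 1.0, i.e. almost no savings.
```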

Replication and data movement technologies were an area of growing interest.  Perhaps less so in a data protection sense, and more in a get-the-right-information-in-the-right-place-at-the-right-time information logistics sort of way.

Producers and consumers of these large information stores were increasingly separated by distance; and associated latency was no one's friend.


When we did finally wander into data protection topics (backup, continuous replication, etc.) there was a strange and rather awkward silence in the room.  No one came out and openly admitted it, but I was left with the suspicion that much of this big data isn't getting adequately protected for one reason or another.
When I asked "would anyone be interested in considering some newer approaches to this topic?", there was very strong interest.  Stay tuned here ...

Feature, Feature, Feature -- Hey, Wait A Minute!
As we went through a laundry list of other specific storage features (e.g. encryption, auto-tiering, hypervisor integration, etc.) the purpose-built crowd said something very important: we're willing to consider all these new features, but not at the expense of the utter simplicity and predictability we have in our existing environments.


Complexity -- in any form -- was the bane of their existence.  Better to have a less-functional solution that scaled and retained its core simplicity aspects vs. a more feature-rich environment that was even a tiny bit less elegant to use.  That came across loud and clear.


For me, this was one of the essential defining elements of what makes big data storage fundamentally different: simplicity and predictability above all else.  Take any seemingly minor inefficiency or iota of complexity, multiply it by a very large number, and you inherently have a major issue.  (An extra minute of manual intervention per terabyte, for example, works out to roughly two working days of effort per petabyte.)

There's More, Of Course ...
We ended up with pages and pages of incredibly detailed notes from the session.  We learned a lot from this group.  And, in some cases, they learned a lot from each other :)

When I run one of these sessions, I sometimes feel a bit guilty that we're taking a lot without giving something back in return.  Time is valuable, and having these people come all the way out to EMC World so we can ask hit-and-miss questions about their world -- well, that's a huge ask from a vendor to a customer.

That being said, when I asked them if they would want to repeat this sort of session in the future, just about everyone raised their hands.  


I think that's because -- when it comes to big data storage -- it's a time for intense dialogue between both sides of the vendor/customer community.  Beware of vendors bearing "total solutions" :)


Instead, I think there's a clear opportunity for vendors to partner with these fascinating big data storage users, and build unique capabilities that help them do what they do even better than today.

A huge thank you to all of you who participated!


By: Chuck Hollis