Why phishing attacks are nastier than ever

Roger Grimes explains “Why phishing attacks are nastier than ever.” Here are the key reasons why many infosec professionals consider spearphishing emails the number one attack vector:

  • The attack is handcrafted by professional criminals
  • The attack appears to be sent by someone you know
  • The attack includes a project you are working on
  • Your attacker has been monitoring your company’s email
  • Your attacker can intercept and change emails as needed
  • Your attacker uses custom or built-in tools to subvert antivirus software
  • Your attacker uses military-grade encryption to tunnel your data home
  • Your attacker covers their (sic) tracks
  • Your attacker has been in your environment for years
  • Your attacker is not afraid of being caught

Roger does have some good administrative process recommendations. I would also recommend a couple of advanced technical controls.

The evolution of SIEM

In the last several years, a new “category” of log analytics for security has arisen called “User Behavior Analytics.” From my 13-year perspective, UBA is really the evolution of SIEM.

The term “Security Information and Event Management (SIEM)” was defined by Gartner 10 years ago. At the time, some people were arguing over Security Information Management (SIM) versus Security Event Management (SEM). Gartner simply combined the two and ended that debate.

The focus of SIEM was on consolidating and analyzing log information from disparate sources such as firewalls, intrusion detection systems, operating systems, etc. in order to meet compliance requirements, detect security incidents, and provide forensics.

At the time, the correlation was designed mostly around IP addresses, although some systems could correlate using ports and protocols, and even users. All log sources were in the datacenter. And most correlation was rule-based, although there was some statistical analysis done as early as 2003. Finally, most SIEMs used relational databases to store the logs.

Starting in the late 2000s, organizations began to realize that while they were meeting compliance requirements, they were still being breached due to the limitations of “traditional” SIEM solutions’ incident detection capabilities as follows:

  • They were designed to focus on IP addresses rather than users. At present, correlating by IP addresses is useless given the increasing number of remote and mobile users, and the number of times a day those users’ IP addresses can change. Retrofitting the traditional SIEM for user analysis has proven difficult.
  • They are notoriously difficult to administer. This is due mostly to the rule-based method of event correlation. Customizing and keeping up-to-date hundreds of rules is time consuming. Too often organizations did not realize this when they purchased the SIEM and therefore under-budgeted resources to administer it.
  • They tend to generate too many false positives. This is also mostly due to rule-based event correlation. This is particularly insidious as analysts start to ignore alerts because investigating most of them turns out to be a waste of time. This also affects morale resulting in high turnover.
  • They miss true positives because either the generated alerts are simply missed by analysts overwhelmed by too many alerts, or there was no rule built to detect the attacker’s activity. The rule-building cycle is usually backward looking. In other words, an incident happens and then rules are built to detect that situation should it happen again. Since attackers are constantly innovating, the rule building process is a losing proposition.
  • They tend to have sluggish performance in part due to organizations underestimating, and therefore under-budgeting, infrastructure requirements, and due to the limitations of relational databases.
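The rule-maintenance burden described above is easy to see in even the simplest correlation rule. The sketch below is purely illustrative (hypothetical logic, not any vendor’s rule syntax): it flags a source IP with five or more failed logins inside a sliding 60-second window. The hard-coded threshold and window are exactly the knobs that must be continually re-tuned, and that generate false positives when traffic patterns shift.

```python
from collections import defaultdict

def failed_login_alerts(events, threshold=5, window=60):
    """events: list of (timestamp, source_ip, outcome) tuples.

    Returns the set of source IPs that produced `threshold` or more
    failed logins within any `window`-second span.
    """
    failures = defaultdict(list)
    alerts = set()
    for ts, ip, outcome in sorted(events):
        if outcome != "failure":
            continue
        failures[ip].append(ts)
        # keep only failures still inside the sliding window
        failures[ip] = [t for t in failures[ip] if ts - t <= window]
        if len(failures[ip]) >= threshold:
            alerts.add(ip)
    return alerts

# A burst of failures trips the rule; a slow, patient attacker
# spacing attempts 30 seconds apart stays just under it.
events = [(i, "10.0.0.5", "failure") for i in range(5)] + \
         [(i * 30, "10.0.0.9", "failure") for i in range(5)]
print(failed_login_alerts(events))  # → {'10.0.0.5'}
```

Note that the patient attacker (10.0.0.9) is missed entirely: the backward-looking rule only catches the attack pattern it was written for.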

In the last few years, we have seen a new security log analysis “category” defined as “User Behavior Analytics” (UBA), which focuses on analyzing user credentials and user-oriented event data. The data stores are almost never relational, and the algorithms are mostly machine learning based, which makes them predictive in nature and requires much less tuning.

Notice how UBA solutions address most of the shortcomings of traditional SIEMs for incident detection. So the question is: why is UBA considered a separate category? It seems to me that UBA is the evolution of SIEM – better user interfaces (in some cases), better algorithms, better log storage systems, and a more appropriate “entity” on which to focus, i.e. users. In addition, UBAs can support user data coming from SaaS as well as on-premises applications and controls.
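To illustrate the user-centric shift, here is a minimal sketch in which each user is compared against his or her own history rather than against a static, hand-maintained rule. Real UBA products use far richer features and machine learning models; the per-user z-score test and the `k=3` threshold below are assumptions for illustration only.

```python
import statistics

def anomalous_days(history, today, k=3.0):
    """Flag users whose activity today deviates from their own baseline.

    history: dict of user -> list of daily event counts
    today:   dict of user -> today's event count
    """
    flagged = []
    for user, counts in history.items():
        mean = statistics.mean(counts)
        stdev = statistics.pstdev(counts) or 1.0  # avoid divide-by-zero
        if abs(today.get(user, 0) - mean) / stdev > k:
            flagged.append(user)
    return flagged

# bob's account suddenly does 10x his normal volume -- flagged
# without anyone having written a "bob rule" in advance.
history = {"alice": [10, 12, 11, 9, 10], "bob": [5, 6, 5, 7, 5]}
today = {"alice": 11, "bob": 60}
print(anomalous_days(history, today))  # → ['bob']
```

The point is that the baseline adapts to each user automatically, which is why this style of analysis requires much less tuning than rule-based correlation.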

I understand that some UBA vendors’ short-term, go-to-market strategy is to complement the installed SIEM. It seems to me this is the justification for considering UBA and SIEM as separate product categories. But my question is, how many organizations are going to be willing to use two or three different products to analyze logs?

In my view, in 3-5 years there won’t be a separate UBA market. The traditional SIEM vendors are already attempting to add UBA capabilities with varying degrees of success. We are also beginning to see SIEM vendors acquire UBA vendors. We’ll see how successful the integration process will be. A couple of UBA vendors will prosper/survive as SIEM vendors due to a combination of superior user interface, more efficacious analytics, faster and more scalable storage, and lower administrative costs.

What is a ‘sophisticated’ cyberattack?

Ira Winkler and Ari Treu Gomes have defined eight rules to help classify cyberattacks. They call them “Irari” rules, a contraction of their first names. Furthermore, each rule is actually a recommendation for improving enterprises’ security defenses.

I agree that the victims of cyberattacks too often classify the breaches to which they were subject as “sophisticated” when they were anything but. On the other hand, Ira and Ari have gone too far for the following reasons:

  1. No organization I am aware of has the resources to fully support all eight recommendations. So how do you prioritize? Risk management you say?
  2. The technology simply does not yet exist to successfully implement some of the recommendations.

There is good news though. During the last few years, largely due to the success of companies like Palo Alto Networks and FireEye, there has been a tremendous surge in well-funded, innovative technical security controls that make many of the Irari recommendations feasible. By innovative, I mean controls that (1) are efficacious for security, (2) enable process improvement, and (3) carry low risk of negatively impacting business processes.

Here are the eight Irari rules and my comments:

The malware used should have been detected. Keeping your anti-virus up-to-date seems reasonable. However, you should not be too satisfied because signature-based anti-virus is a very low bar. In a variation on HD Moore’s Law, any attacker can buy software to modify her malware to bypass anti-virus products. I recommend starting the process of adding a non-signature-based endpoint prevention solution and replacing “paid-for” A/V with Microsoft’s free tools.

The attack exploited vulnerabilities where patches were available. This is a tough one. First, is it really possible to patch every vulnerability? Second, if you are not going to, how do you prioritize? CVSS has some well-understood weaknesses. There are better ways to prioritize the risks of vulnerabilities.
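As one purely illustrative example of prioritizing beyond raw CVSS, a score can be weighted by exploit availability and by the value of the affected asset. The weighting factors below are made up for illustration; a real program would tune them against its own environment.

```python
def risk_score(cvss, exploit_public, asset_value):
    """Illustrative risk ranking.

    cvss:           base score, 0-10
    exploit_public: True if a public exploit exists
    asset_value:    1 (low) to 5 (crown jewel)
    """
    exploit_factor = 2.0 if exploit_public else 1.0
    return cvss * exploit_factor * asset_value

# A medium-CVSS flaw on a crown-jewel asset with a public exploit
# outranks a critical-CVSS flaw on a low-value asset with none.
vulns = [
    ("CVE-A", 9.8, False, 1),
    ("CVE-B", 6.5, True, 5),
]
ranked = sorted(vulns, key=lambda v: risk_score(*v[1:]), reverse=True)
print([name for name, *_ in ranked])  # → ['CVE-B', 'CVE-A']
```

The point is not these particular weights but that patching strictly by CVSS score can invert the real-world priority.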

Multifactor authentication was not in use on critical servers. This makes sense. However, the cost of managing certificates is, too often, not considered.

Static passwords were used in attacks on critical servers. While the concept of changing passwords frequently sounds good, too often the human costs measured in time consumed changing passwords are not considered. An automated password changer would be interesting.

If phishing was involved, there was no awareness program in place that went beyond phishing simulations and computer-based training. Phishing is a primary attack vector. The issue is how effective is your security awareness program? Moreover, how well can you monitor its effectiveness? Note here that Ira Winkler’s company, Secure Mentem, provides security awareness programs.

There was poor network segmentation that allowed the attackers to jump from low-value networks to critical systems. There is no doubt that segmentation is of critical importance. It’s well understood, as the Irari authors point out, that better segmentation in a couple of areas would have prevented the credit card exfiltration of the Target breach. However, until very recently, the complexity and implementation costs of datacenter segmentation put it out of reach for most organizations.

User accounts that were compromised had excessive privileges. Another excellent recommendation that, until very recently, was extremely difficult to prevent or detect. Users need administrative privileges for a variety of reasons. But there are now security agents that prevent unneeded activities despite users having administrative privileges. There are also User Behavior Analytics tools that are easy to administer and operate that will highlight users whose application access rights are greater than their peers.

Next Generation Firewall Best Practices

Cymbel has been providing consulting services related to next generation firewalls since 2007. Based on our experience, we are often asked about “best practices.” Given the variety of deployment scenarios and different priorities of our clients, we have found it difficult to develop a general set of best practices. However, I recently observed that Palo Alto Networks has been adding best practices information to its Administration Guides.

So I thought it might be useful to pull together Palo Alto Networks’ best practices into a single document, which we have done. If you are interested in receiving this document, please let me know by filling out the form on this page, commenting below, or contacting us at 617-581-6633.

Perspective on NSS Labs – Palo Alto Networks controversy

I am posting the Comment I wrote on the Palo Alto Networks site in response to Lee Klarich’s post which itself was in response to NSS Labs 2014 report on Next Generation Firewalls.

I have two points to make about the Palo Alto Networks – NSS Labs controversy. One, the NSS Labs Next Generation Firewall Comparative Analysis simply does not pass the smell test. Two, it’s not even clear to me that all of the firewalls tested are actually Next Generation Firewalls.

Regarding my first point, I am a Principal at Cymbel, a Palo Alto Networks reseller since 2007. We work with some of the largest organizations in the United States, which have put Palo Alto Networks firewalls through extremely rigorous evaluations for extended periods, and have then deployed Palo Alto firewalls for many years. NSS Labs seems to be saying that all of the people in these organizations are idiots. This does not make sense to me.

In addition, NSS Labs seems to be saying that the Gartner people, who speak with far more firewall customers than we do, and place Palo Alto Networks in the Leader Quadrant and furthest to the right, are also morons. I’m not buying it.

Regarding my second point, at a more basic level, what is NSS Labs’ definition of a Next Generation Firewall? Since I am not a paying customer of NSS Labs, I don’t know. Let me start with the definition of a firewall – the ability to establish a Positive Control Model. In other words, define what network traffic is allowed, and block everything else, i.e. default deny.

In the 1990s, this was relatively easy because all applications ran on well-defined port numbers. Therefore you could define policies based on port numbers, IP addresses, and protocols and be assured that you had full network visibility and control.

Starting in the early 2000s, this well-behaved order began to break down. Applications were built to share already open ports in order to bypass traditional stateful inspection firewalls. By the mid-2000s, there were hundreds, if not thousands, of applications that share ports, automatically hop from port to port, and use encryption to evade traditional firewalls. Thus, these traditional firewalls were essentially rendered useless, and could no longer support a Positive Control Model.

So a new type of firewall was needed. In order to re-establish a positive control model, this new type of firewall has to monitor all 65,535 TCP and UDP ports for all applications, all of the time. In other words, a firewall that enables you to define which applications are allowed, regardless of the ports on which they run, and block all of the others, known or unknown.

Furthermore, a Next Generation Firewall must enable you to lock a specifically allowed application to specifically allowed port(s), and prevent any other application from running on the port(s) opened for that specific application.
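The positive control model described above can be sketched as an allow-list keyed on (application, port), with everything else denied by default. This is a hypothetical illustration of the concept, not any vendor’s policy syntax.

```python
# Allow-list: an identified application may only use the port(s)
# explicitly granted to it. Everything else falls through to deny.
ALLOW = {
    ("dns", 53): True,
    ("web-browsing", 443): True,
}

def verdict(app, port):
    """Default-deny lookup: allow only (app, port) pairs on the list."""
    return "allow" if ALLOW.get((app, port)) else "deny"

print(verdict("web-browsing", 443))  # allowed app on its port → allow
print(verdict("ssh", 443))           # port-sharing evasion → deny
print(verdict("unknown-tcp", 8081))  # unknown app → deny by default
```

Note how the second case captures the post’s key point: an application tunneling over an open port is blocked even though the port itself is allowed, because the decision is made on the application, not the port.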

Palo Alto Networks, in 2007, was the first company to ship this new type of firewall, which Gartner in 2009 called a “Next Generation Firewall.” Since then, virtually every firewall vendor in the industry has adopted the term. But in reality, which ones actually meet the real definition of a Next Generation Firewall?

I would recommend that NSS Labs release the details of its testing methodology for all to review. By keeping their testing methodology behind a paywall, they are simply feeding into Palo Alto’s “pay to play” contention.

Detecting unknown malware using sandboxing or anomaly detection

It’s been clear for several years that signature-based anti-virus and Intrusion Prevention / Detection controls are not sufficient to detect modern, fast-changing malware. Sandboxing has become a popular (rightfully so) complementary control to detect “unknown” malware, i.e. malware for which no signature exists yet. The concept is straightforward: analyze suspicious inbound files by allowing them to run in a virtual machine environment. While sandboxing has been successful, I believe it’s worthwhile to understand its limitations. Here they are:

  • Access to the malware in motion, i.e. on the network, is not always available
  • Most sandboxing solutions are limited to Windows
  • Malware authors have developed techniques to discover virtualized or testing environments
  • Newer malware communication techniques use random, one-time domains and non-HTTP protocols
  • Sandboxing cannot confirm that malware actually installed and infected the endpoint
  • Droppers, the first stage of multi-stage malware, are often the only part that is analyzed

Please check out Damballa’s Webcast on the Shortfalls of Security Sandboxing for more details.

Let me reiterate, I am not saying that sandboxing is not valuable. It surely is. However, due to the limitations listed above, we recommend that it be complemented by a log-based anomaly detection control that’s analyzing one or more of the following: outbound DNS traffic, all outbound traffic through the firewall and proxy server, user connections to servers, for retailers – POS terminals connections to servers, application authentications and authorizations. In addition to different network traffic sources, there are also a variety of statistical approaches available including supervised and unsupervised machine learning algorithms.
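As one concrete (and deliberately simplified) example of the outbound-DNS analysis mentioned above: algorithmically generated one-time domains tend to have higher character entropy than human-chosen names. The 3.5-bit threshold below is an assumption for illustration only; a production control would combine many such features rather than rely on a single signal.

```python
import math
from collections import Counter

def entropy(label):
    """Shannon entropy (bits per character) of a domain label."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def suspicious(domain, threshold=3.5):
    """Flag domains whose leftmost label looks machine-generated."""
    label = domain.split(".")[0]
    return entropy(label) > threshold

print(suspicious("mail.example.com"))              # → False
print(suspicious("xj9k2q7vhn3p8wz5.example.com"))  # → True
```

A check this simple would of course produce both false positives (CDN hostnames) and false negatives (dictionary-word DGAs), which is why the post recommends combining multiple traffic sources and statistical approaches.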

So in order to substantially reduce the risk of a data breach from unknown malware, the issue is not sandboxing or anomaly detection, it’s sandboxing and anomaly detection.

This post has been cross-posted from www.riskpundit.com.

How Palo Alto Networks could have prevented the Target breach

Brian Krebs’ recent posts on the Target breach, A First Look at the Target Intrusion, Malware, and A Closer Look at the Target Malware, provide the most detailed and accurate analysis available.

The malware the attackers used captured complete credit card data contained on the mag stripe by “memory scraping.”

This type of malicious software uses a technique that parses data stored briefly in the memory banks of specific POS devices; in doing so, the malware captures the data stored on the card’s magnetic stripe in the instant after it has been swiped at the terminal and is still in the system’s memory. Armed with this information, thieves can create cloned copies of the cards and use them to shop in stores for high-priced merchandise. Earlier this month, U.S. Cert issued a detailed analysis of several common memory scraping malware variants.

Furthermore, no known antivirus software at the time could detect this malware.

The source close to the Target investigation said that at the time this POS malware was installed in Target’s environment (sometime prior to Nov. 27, 2013), none of the 40-plus commercial antivirus tools used to scan malware at virustotal.com flagged the POS malware (or any related hacking tools that were used in the intrusion) as malicious. “They were customized to avoid detection and for use in specific environments,” the source said.

The key point I want to discuss however, is that the attackers took control of an internal Target server and used it to collect and store the stolen credit card information from the POS terminals.

Somehow, the attackers were able to upload the malicious POS software to store point-of-sale machines, and then set up a control server within Target’s internal network that served as a central repository for data hoovered by all of the infected point-of-sale devices.

“The bad guys were logging in remotely to that [control server], and apparently had persistent access to it,” a source close to the investigation told KrebsOnSecurity. “They basically had to keep going in and manually collecting the dumps.”

First, obviously the POS terminals have to communicate with specific Target servers to complete and store transactions. Second, the communications between the POS terminals and the malware on the compromised server(s) could have been denied had there been policies defined and enforced to do so. Palo Alto Networks’ Next Generation Firewalls are ideal for this use case for the following two reasons:

  1. Palo Alto Networks enables you to include zone, IP address, port, user, protocol, application information, and more in a single policy.
  2. Palo Alto Networks firewalls monitor all ports for all protocols and applications, all of the time, to enforce these policies and establish a Positive Control Model (default deny or application traffic whitelisting).
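To make these two points concrete, here is a minimal sketch of a single policy that matches on zone, application, and port together, with an implicit final deny. The field names and values are hypothetical illustrations, not Palo Alto Networks syntax.

```python
# One policy expressing: POS terminals may talk to the payment
# servers, only via the payment application, only on its port.
POLICIES = [
    {"from_zone": "pos", "to_zone": "payment-servers",
     "app": "pos-transaction", "port": 5555, "action": "allow"},
]

def evaluate(from_zone, to_zone, app, port):
    """First-match policy lookup with an implicit default deny."""
    for p in POLICIES:
        if (p["from_zone"], p["to_zone"], p["app"], p["port"]) == \
           (from_zone, to_zone, app, port):
            return p["action"]
    return "deny"

# A POS terminal completing a legitimate transaction:
print(evaluate("pos", "payment-servers", "pos-transaction", 5555))  # → allow
# Malware on a POS terminal pushing card dumps to a staging server:
print(evaluate("pos", "internal-servers", "ftp", 5555))             # → deny
```

Under a policy of this shape, the exfiltration traffic in the second case never matches any allow rule, which is the crux of the argument that follows.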

You might very well ask, why couldn’t Router Access Control Lists be used? Or why not a traditional port-based, stateful inspection firewall? Because these types of network controls limit policy definition to ports, IP addresses, and protocols, which cannot enforce a Positive Control Model. They are simply not detailed enough to control traffic with a high degree of confidence. One or the other might have worked in the 1990s. But by the mid-2000s, network-based applications were regularly bypassing both of these types of controls.

Therefore, if Target had deployed Palo Alto Networks firewalls between the POS terminals and their servers with granular policies to control POS terminals’ communications by zone, port, and application, the malware on the POS terminals would never have been able to communicate with the server(s) the attackers compromised.

In addition, it’s possible that the POS terminals may never have become infected in the first place, because the server(s) the attackers initially compromised would not have been able to communicate with the POS terminals. Note, I am not assuming that the servers used to compromise the POS terminals were the same servers used to collect the credit card data that was breached.

Unfortunately, a control with the capabilities of Palo Alto Networks is not specified by the Payment Card Industry (PCI) Data Security Standard (DSS). Yes, “Requirement #1: Install and maintain a firewall configuration to protect cardholder data,” seems to cover the subject. However, you can fully meet these PCI DSS requirements with a port-based, stateful inspection firewall. But, as I said above, an attacker can easily bypass this 1990s type of network control. Retailers and e-Commerce sites need to go beyond PCI DSS to actually protect themselves. What you need is a Next Generation Firewall, like Palo Alto Networks’, which enables you to define and enforce a Positive Control Model.

This post has been cross-posted from www.riskpundit.com.

Two views on FireEye’s Mandiant acquisition

There are two views on the significance of FireEye’s acquisition of Mandiant. One is the consensus typified by Arik Hesseldahl, Why FireEye is the Internet’s New Security Powerhouse. Arik sees the synergy of FireEye’s network-based appliances coupled with Mandiant’s endpoint agents.

Richard Stiennon has a different view, Will FireEye’s Acquisition Strategy Work? Richard believes that FireEye’s stock price is way overvalued compared to more established players like Check Point and Palo Alto Networks. While FireEye initially led the market with network-based “sandboxing” technology to detect unknown threats, most of the major security vendors have matched or even exceeded FireEye’s capabilities. IMHO, you should not even consider a network-based security manufacturer that doesn’t provide integrated sandboxing technology to detect unknown threats. Therefore the only way FireEye can meet Wall Street’s revenue expectations is via acquisition using its inflated stock.

The best strategy for a high-flying public company whose products do not have staying power is to embark on an acquisition spree that juices revenue. In those terms, trading overvalued stock for Mandiant, with estimated 2013 revenue of $150 million, will easily satisfy Wall Street’s demand for continued growth to sustain valuations. FireEye has already locked in 100% growth for 2014.

It will probably take a couple of years to determine who is correct.



Response to Stiennon’s attack on NIST Cybersecurity Framework

In late October, NIST issued its Preliminary Cybersecurity Framework based on President Obama’s Executive Order 13636, Improving Critical Infrastructure Cybersecurity.

The NIST Cybersecurity Framework is based on one of the most basic triads of information security – Prevention, Detection, Response. In other words, start by preventing as many threats as possible. But you also must recognize that 100% prevention is not possible, so you need to invest in Detection controls. And of course, there are going to be security incidents, therefore you must invest in Response.

The NIST Framework defines a “Core” that expands on this triad. It defines five basic “Functions” of cybersecurity – Identify, Protect, Detect, Respond, and Recover. Each Function is made up of related Categories and Subcategories.

Richard Stiennon, as always provocative, rails against the NIST Framework, calling it “fatally flawed,” because it’s “poisoned with Risk Management thinking.” He goes on to say:

The problem with frameworks in general is that they are so removed from actually defining what has to be done to solve a problem. The problem with critical infrastructure, which includes oil and gas pipelines, the power grid, and city utilities, is that they are poorly protected against network and computer attacks. Is publishing a turgid high-level framework going to address that problem? Will a nuclear power plant that perfectly adopts the framework be resilient to cyber attack? Are there explicit controls that can be tested to determine if the framework is in place? Sadly, no to all of the above.

He then says:

IT security Risk Management can be summarized briefly:

1. Identify Assets

2. Rank business value of each asset

3. Discover vulnerabilities

4. Reduce the risk to acceptable value by patching and deploying defenses around the most critical assets

He then summarizes the problems with this approach as follows:

1. It is impossible to identify all assets

2. It is impossible to rank the value of each asset

3. It is impossible to determine all vulnerabilities

4. Trying to combine three impossible tasks to manage risk is impossible

Mr. Stiennon’s solution is to focus on Threats.

How many ways has Stiennon gone wrong?

First, if your Risk Management process is as Stiennon outlines, then your process needs to be updated. Risk Management is surely not just about identifying assets and patching vulnerabilities. Threats are a critical component of Risk Management. Furthermore, while the NIST Framework surely includes identifying assets and patching vulnerabilities, they are only two Subcategories within the rich Identify and Protect Functions. The whole Detect Function is focused on detecting threats! Therefore Stiennon is completely off-base in his criticism. I wonder if he actually read the NIST document.

Second, all organizations perform Risk Management either implicitly or explicitly. No organization has enough money to implement every administrative and technical control that is available. And that surely goes for all of the controls recommended by the NIST Framework’s Categories and Subcategories. Even organizations that want to fully commit to the NIST Framework will still need to prioritize the order in which controls are implemented. Trade-offs have to be made. Is it better to make these trade-offs implicitly and unsystematically? Or is it better to have an explicit Risk Management process that can be improved over time?

I am surely not saying that we have reached the promised land of cybersecurity risk management, just as we have not in virtually any other field to which risk management is applied. There is a lot of research going on to improve risk management and decision theory. One example is the use of Prospect Theory.

Third, if IT security teams are to communicate successfully with senior management and Boards of Directors, explain to me how else to do it? IT security risks, which are technical in nature, have to be translated into business terms. That means, how will a threat impact the business. It has to be in terms of core business processes. Is Richard saying that an organization cannot and should not expect to identify the IT assets related to a specific business process? I think not.

When we in IT security look for a model to follow, I believe it should be akin to the role of lawyers’ participation in negotiating a business transaction. At some point, the lawyers have done all the negotiating they can. They then have to explain to the business executives responsible for the transaction the risks involved in accepting a particular paragraph or sentence in the contract. In other words, lawyers advise and business executives decide.

In the same way, it is up to IT security folks to explain a particular IT security risk in business terms to the business executive, who will then decide to accept the risk or reduce it by allocating funds to implement the proposed administrative or technical control. And of course meaningful metrics that can show the value of the requested control must be included in the communication process.

Given the importance of information technology to the success of any business, cybersecurity decisions must be elevated to the business level. Risk Management is the language of business executives. While cybersecurity risk management is clearly a young field, we surely cannot give up. We have to work to improve it. I believe the NIST Cybersecurity Framework is a big step in the right direction.


Detection Controls Beyond Signatures and Rules

Charles Kolodgy of IDC has a thoughtful post on SecurityCurrent entitled, Defending Against Custom Malware: The Rise of STAP.

STAP (Specialized Threat Analysis and Protection) technical controls are designed to complement, maybe in the future replace, traditional detection controls that require signatures and rules. STAP controls focus on threats/attacks that have not been seen before or that can morph very quickly and therefore are missed by signature-based controls.

Actors such as criminal organizations and nation states are interested in the long haul.  They create specialized malware, intended for a specific target or groups of targets, with the ultimate goal of becoming embedded in the target’s infrastructure.  These threats are nearly always new and never seen before.  This malware is targeted, polymorphic, and dynamic.  It can be delivered via Web page, spear-phishing email, or any other number of avenues.

Mr. Kolodgy breaks STAP controls into three categories:

  • Virtual sandboxing/emulation and behavioral analysis
  • Virtual containerization/isolation
  • Advanced system scanning

Based on Cymbel’s research, we would add a fourth category: advanced log analysis. There are considerable research efforts and well-funded companies going beyond traditional rule- and statistical/threshold-based techniques. Many of these efforts are leveraging Hadoop and/or advanced Machine Learning algorithms.