Perspective on NSS Labs – Palo Alto Networks controversy

I am posting the comment I wrote on the Palo Alto Networks site in response to Lee Klarich’s post, which itself was a response to NSS Labs’ 2014 report on Next Generation Firewalls.

I have two points to make about the Palo Alto Networks – NSS Labs controversy. One, the NSS Labs Next Generation Firewall Comparative Analysis simply does not pass the smell test. Two, it’s not even clear to me that all of the firewalls tested are actually Next Generation Firewalls.

Regarding my first point, I am a Principal at Cymbel, a Palo Alto Networks reseller since 2007. We work with some of the largest organizations in the United States who have put Palo Alto Networks firewalls through extremely rigorous evaluations for extended periods, and have then deployed Palo Alto firewalls for many years. NSS Labs seems to be saying that all of the people in these organizations are idiots. This does not make sense to me.

In addition, NSS Labs seems to be saying that the Gartner people, who speak with far more firewall customers than we do, and place Palo Alto Networks in the Leader Quadrant and furthest to the right, are also morons. I’m not buying it.

Regarding my second point, at a more basic level, what is NSS Labs’ definition of a Next Generation Firewall? Since I am not a paying customer of NSS Labs, I don’t know. Let me start with the definition of a firewall – the ability to establish a Positive Control Model. In other words, define what network traffic is allowed, and block everything else, i.e. default deny.

In the 1990s, this was relatively easy because all applications ran on well-defined port numbers. Therefore, you could define policies based on port numbers, IP addresses, and protocols and be assured that you had full network visibility and control.

Starting in the early 2000s, this well-behaved order began to break down. Applications were built to share already open ports in order to bypass traditional stateful inspection firewalls. By the mid-2000s, there were hundreds, if not thousands, of applications that share ports, automatically hop from port to port, and use encryption to evade traditional firewalls. Thus, these traditional firewalls were essentially rendered useless, and could no longer support a Positive Control Model.

So a new type of firewall was needed. In order to re-establish a Positive Control Model, this new type of firewall must monitor all 65,535 TCP and UDP ports for all applications, all of the time. In other words, a firewall that enables you to define which applications are allowed, regardless of the ports on which they run, and block all of the others, known or unknown.

Furthermore, a Next Generation Firewall must enable you to lock a specifically allowed application to specifically allowed port(s), and prevent any other application from running on the port(s) opened for that specific application.
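To make the distinction concrete, here is a minimal sketch in Python of how an application-aware, default-deny policy differs from a port-based one. This is purely illustrative — the rule format and application labels are my own assumptions, not any vendor’s policy engine or syntax — and it assumes the application label comes from traffic classification, not from the port number.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    app: str    # classified application, e.g. "smtp" or "dropbox" (assumed to come from DPI)
    port: int
    user: str

# Hypothetical allow rules: each permitted application is locked to specific ports.
ALLOW_RULES = [
    {"app": "smtp", "ports": {25}, "users": {"mail-gateway"}},
    {"app": "web-browsing", "ports": {80, 443}, "users": {"any"}},
]

def evaluate(flow: Flow) -> str:
    """Allow only explicitly permitted applications on their designated ports."""
    for rule in ALLOW_RULES:
        if (flow.app == rule["app"]
                and flow.port in rule["ports"]
                and ("any" in rule["users"] or flow.user in rule["users"])):
            return "allow"
    return "deny"  # Positive Control Model: anything not explicitly allowed is denied

# A file-sharing app tunneling over an "open" port is still denied,
# while ordinary web browsing on its allowed ports goes through.
print(evaluate(Flow(app="dropbox", port=80, user="alice")))       # deny
print(evaluate(Flow(app="web-browsing", port=443, user="alice"))) # allow
```

A port-based firewall sees only the port and addresses, which is exactly why port-sharing and port-hopping applications slip through it.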

Palo Alto Networks, in 2007, was the first company to ship this new type of firewall that, in 2009, Gartner called a “Next Generation Firewall.” Since then, virtually every firewall vendor in the industry has adopted the term. But in reality, which ones actually meet the real definition of a Next Generation Firewall?

I would recommend that NSS Labs release the details of its testing methodology for all to review. By keeping that methodology behind a paywall, it is simply feeding into Palo Alto Networks’ “pay to play” contention.

Detecting unknown malware using sandboxing or anomaly detection

It’s been clear for several years that signature-based anti-virus and Intrusion Prevention / Detection controls are not sufficient to detect modern, fast-changing malware. Sandboxing has become a popular (rightfully so) complementary control for detecting “unknown” malware, i.e. malware for which no signature exists yet. The concept is straightforward: analyze suspicious inbound files by allowing them to run in a virtual machine environment. While sandboxing has been successful, I believe it’s worthwhile to understand its limitations. Here they are:

  • Access to the malware in motion, i.e. on the network, is not always available.
  • Most sandboxing solutions are limited to Windows.
  • Malware authors have developed techniques to detect virtualized or testing environments.
  • Newer malware communication techniques use random, one-time domains and non-HTTP protocols.
  • Sandboxing cannot confirm that malware actually installed and infected the endpoint.
  • Droppers, the first stage of multi-stage malware, are often the only part that is analyzed.

Please check out Damballa’s Webcast on the Shortfalls of Security Sandboxing for more details.

Let me reiterate: I am not saying that sandboxing is not valuable. It surely is. However, due to the limitations listed above, we recommend that it be complemented by a log-based anomaly detection control that analyzes one or more of the following: outbound DNS traffic; all outbound traffic through the firewall and proxy server; user connections to servers; for retailers, POS terminal connections to servers; and application authentications and authorizations. In addition to these different network traffic sources, there is also a variety of statistical approaches available, including supervised and unsupervised machine learning algorithms.
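As one concrete example, here is a minimal sketch in Python of a simple statistical check over outbound DNS logs. The log format, domain names, and threshold are illustrative assumptions, not a product design; the idea is to flag hosts that repeatedly resolve never-seen-before, random-looking domains — one of the behaviors sandboxing alone tends to miss.

```python
import math
from collections import Counter, defaultdict

def entropy(label: str) -> float:
    """Shannon entropy of a domain label; random-looking strings score higher."""
    counts = Counter(label)
    return -sum((n / len(label)) * math.log2(n / len(label)) for n in counts.values())

def score_hosts(dns_log, known_domains, entropy_threshold=3.0):
    """dns_log: iterable of (source_host, queried_domain) pairs.
    Returns hosts ranked by how many unknown, high-entropy domains they resolved."""
    suspicious = defaultdict(int)
    for host, domain in dns_log:
        label = domain.split(".")[0]  # left-most label, e.g. "kq3x9zt2vw"
        if domain not in known_domains and entropy(label) > entropy_threshold:
            suspicious[host] += 1
    return sorted(suspicious.items(), key=lambda kv: kv[1], reverse=True)

# Toy data: a host repeatedly resolving random-looking domains rises to the top.
log = [("pos-17", "kq3x9zt2vw.example.net")] * 50 + [("ws-04", "mail.example.com")] * 50
print(score_hosts(log, known_domains={"mail.example.com"}))  # [('pos-17', 50)]
```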

So in order to substantially reduce the risk of a data breach from unknown malware, the issue is not sandboxing or anomaly detection, it’s sandboxing and anomaly detection.

This post has been cross-posted from www.riskpundit.com.

How Palo Alto Networks could have prevented the Target breach

Brian Krebs’ recent posts on the Target breach, A First Look at the Target Intrusion, Malware, and A Closer Look at the Target Malware, provide the most detailed and accurate analysis available.

The malware the attackers used captured complete credit card data contained on the mag stripe by “memory scraping.”

This type of malicious software uses a technique that parses data stored briefly in the memory banks of specific POS devices; in doing so, the malware captures the data stored on the card’s magnetic stripe in the instant after it has been swiped at the terminal and is still in the system’s memory. Armed with this information, thieves can create cloned copies of the cards and use them to shop in stores for high-priced merchandise. Earlier this month, U.S. Cert issued a detailed analysis of several common memory scraping malware variants.

Furthermore, no known antivirus software at the time could detect this malware.

The source close to the Target investigation said that at the time this POS malware was installed in Target’s environment (sometime prior to Nov. 27, 2013), none of the 40-plus commercial antivirus tools used to scan malware at virustotal.com flagged the POS malware (or any related hacking tools that were used in the intrusion) as malicious. “They were customized to avoid detection and for use in specific environments,” the source said.

The key point I want to discuss, however, is that the attackers took control of an internal Target server and used it to collect and store the stolen credit card information from the POS terminals.

Somehow, the attackers were able to upload the malicious POS software to store point-of-sale machines, and then set up a control server within Target’s internal network that served as a central repository for data hoovered by all of the infected point-of-sale devices.

“The bad guys were logging in remotely to that [control server], and apparently had persistent access to it,” a source close to the investigation told KrebsOnSecurity. “They basically had to keep going in and manually collecting the dumps.”

First, obviously the POS terminals have to communicate with specific Target servers to complete and store transactions. Second, the communications between the POS terminals and the malware on the compromised server(s) could have been denied had there been policies defined and enforced to do so. Palo Alto Networks’ Next Generation Firewalls are ideal for this use case for the following two reasons:

  1. Palo Alto Networks enables you to include zone, IP address, port, user, protocol, application information, and more in a single policy.
  2. Palo Alto Networks firewalls monitor all ports for all protocols and applications, all of the time, to enforce these policies and establish a Positive Control Model (default deny or application traffic whitelisting).

You might very well ask, why couldn’t Router Access Control Lists be used? Or why not a traditional port-based, stateful inspection firewall? Because these types of network controls limit policy definition to ports, IP addresses, and protocols, which cannot enforce a Positive Control Model. They are simply not detailed enough to control traffic with a high degree of confidence. One or the other might have worked in the 1990s. But by the mid-2000s, network-based applications were regularly bypassing both of these types of controls.

Therefore, if Target had deployed Palo Alto Networks firewalls between the POS terminals and their servers with granular policies to control POS terminals’ communications by zone, port, and application, the malware on the POS terminals would never have been able to communicate with the server(s) the attackers compromised.
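To illustrate what such granular policies could look like, here is a hypothetical sketch in plain Python. The zone names, application labels, and port number are assumptions for illustration only; this is not actual PAN-OS syntax and not Target’s real environment.

```python
# Hypothetical segmentation rules between the POS zone and the rest of the network.
POS_POLICY = [
    # Only the payment application, from POS terminals to the payment servers,
    # on its designated port, is allowed.
    {"from_zone": "pos-terminals", "to_zone": "payment-servers",
     "application": "pos-transaction-app", "port": 6789, "action": "allow"},
    # Everything else to or from the POS zone is denied -- including traffic to an
    # attacker-controlled staging server elsewhere on the internal network.
    {"from_zone": "pos-terminals", "to_zone": "any",
     "application": "any", "port": "any", "action": "deny"},
    {"from_zone": "any", "to_zone": "pos-terminals",
     "application": "any", "port": "any", "action": "deny"},
]

def first_match(flow, policy=POS_POLICY):
    """Return the action of the first matching rule; default deny if nothing matches."""
    for rule in policy:
        if all(rule[k] in ("any", flow[k])
               for k in ("from_zone", "to_zone", "application", "port")):
            return rule["action"]
    return "deny"

# The attackers' collection traffic uses an unexpected application, so it is denied
# even though a path between the zones exists for the payment application.
print(first_match({"from_zone": "pos-terminals", "to_zone": "payment-servers",
                   "application": "custom-exfil-tool", "port": 6789}))       # deny
print(first_match({"from_zone": "pos-terminals", "to_zone": "payment-servers",
                   "application": "pos-transaction-app", "port": 6789}))     # allow
```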

In addition, it’s possible that the POS terminals may never have become infected in the first place, because the server(s) the attackers initially compromised would not have been able to communicate with the POS terminals. Note, I am not assuming that the servers used to compromise the POS terminals were the same servers used to collect the credit card data that was breached.

Unfortunately, a control with the capabilities of Palo Alto Networks is not specified by the Payment Card Industry (PCI) Data Security Standard (DSS). Yes, “Requirement #1: Install and maintain a firewall configuration to protect cardholder data,” seems to cover the subject. However, you can fully meet these PCI DSS requirements with a port-based, stateful inspection firewall. But, as I said above, an attacker can easily bypass this 1990s type of network control. Retailers and e-Commerce sites need to go beyond PCI DSS to actually protect themselves. What you need is a Next Generation Firewall like Palo Alto Networks’, which enables you to define and enforce a Positive Control Model.

This post has been cross-posted from www.riskpundit.com.

Two views on FireEye’s Mandiant acquisition

There are two views on the significance of FireEye’s acquisition of Mandiant. One is the consensus typified by Arik Hesseldahl, Why FireEye is the Internet’s New Security Powerhouse. Arik sees the synergy of FireEye’s network-based appliances coupled with Mandiant’s endpoint agents.

Richard Stiennon has a different view, Will FireEye’s Acquisition Strategy Work? Richard believes that FireEye’s stock price is way overvalued compared to more established players like Check Point and Palo Alto Networks. While FireEye initially led the market with network-based “sandboxing” technology to detect unknown threats, most of the major security vendors have matched or even exceeded FireEye’s capabilities. IMHO, you should not even consider any network-based security manufacturer that doesn’t provide integrated sandboxing technology to detect unknown threats. Therefore the only way FireEye can meet Wall Street’s revenue expectations is via acquisitions using its inflated stock.

The best strategy for a high-flying public company whose products do not have staying power is to embark on an acquisition spree that juices revenue. In those terms, trading overvalued stock for Mandiant, with estimated 2013 revenue of $150 million, will easily satisfy Wall Street’s demand for continued growth to sustain valuations. FireEye has already locked in 100% growth for 2014.

It will probably take a couple of years to determine who is correct.


Response to Stiennon’s attack on NIST Cybersecurity Framework

In late October, NIST issued its Preliminary Cybersecurity Framework based on President Obama’s Executive Order 13636, Improving Critical Infrastructure Cybersecurity.

The NIST Cybersecurity Framework is based on one of the most basic triads of information security – Prevention, Detection, Response. In other words, start by preventing as many threats as possible. But you also must recognize that 100% prevention is not possible, so you need to invest in Detection controls. And of course, there are going to be security incidents, therefore you must invest in Response.

The NIST Framework defines a “Core” that expands on this triad. It defines five basic “Functions” of cybersecurity – Identify, Protect, Detect, Respond, and Recover. Each Function is made up of related Categories and Subcategories.
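For readers who have not looked at the document, the Core’s overall shape can be sketched roughly as a nested mapping. Only a few example Categories are shown here; Subcategories and their informative references, omitted in this sketch, sit one level further down.

```python
# A rough sketch of the Framework Core's shape: Functions -> example Categories.
FRAMEWORK_CORE = {
    "Identify": ["Asset Management", "Risk Assessment"],
    "Protect":  ["Access Control", "Data Security", "Protective Technology"],
    "Detect":   ["Anomalies and Events", "Security Continuous Monitoring"],
    "Respond":  ["Response Planning", "Mitigation"],
    "Recover":  ["Recovery Planning", "Improvements"],
}
```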

Richard Stiennon, as always provocative, rails against the NIST Framework, calling it “fatally flawed,” because it’s “poisoned with Risk Management thinking.” He goes on to say:

The problem with frameworks in general is that they are so removed from actually defining what has to be done to solve a problem. The problem with critical infrastructure, which includes oil and gas pipelines, the power grid, and city utilities, is that they are poorly protected against network and computer attacks. Is publishing a turgid high-level framework going to address that problem? Will a nuclear power plant that perfectly adopts the framework be resilient to cyber attack? Are there explicit controls that can be tested to determine if the framework is in place? Sadly, no to all of the above.

He then says:

IT security Risk Management can be summarized briefly:

1. Identify Assets
2. Rank business value of each asset
3. Discover vulnerabilities
4. Reduce the risk to acceptable value by patching and deploying defenses around the most critical assets

He then summarizes the problems with this approach as follows:

1. It is impossible to identify all assets
2. It is impossible to rank the value of each asset
3. It is impossible to determine all vulnerabilities
4. Trying to combine three impossible tasks to manage risk is impossible

Mr. Stiennon’s solution is to focus on Threats.

How many ways has Stiennon gone wrong?

First, if your Risk Management process is as Stiennon outlines, then your process needs to be updated. Risk Management is surely not just about identifying assets and patching vulnerabilities. Threats are a critical component of Risk Management. Furthermore, while the NIST Framework surely includes identifying assets and patching vulnerabilities, they are only two Subcategories within the rich Identify and Protect Functions. The whole Detect Function is focused on detecting threats! Therefore Stiennon is completely off-base in his criticism. I wonder if he actually read the NIST document.

Second, all organizations perform Risk Management either implicitly or explicitly. No organization has enough money to implement every administrative and technical control that is available. And that surely goes for all of the controls recommended by the NIST Framework’s Categories and Subcategories. Even the organizations that want to fully commit to the NIST Framework will still need to prioritize the order in which controls are implemented. Trade-offs have to be made. Is it better to make these trade-offs implicitly and unsystematically? Or is it better to have an explicit Risk Management process that can be improved over time?

I am surely not saying that we have reached the promised land of cybersecurity risk management, just as we have not in virtually any other field to which risk management is applied. There is a lot of research going on to improve risk management and decision theory. One example is the use of Prospect Theory.

Third, if IT security teams are to communicate successfully with senior management and Boards of Directors, how else can they do it? IT security risks, which are technical in nature, have to be translated into business terms. That means explaining how a threat will impact the business, in terms of its core business processes. Is Richard saying that an organization cannot and should not expect to identify the IT assets related to a specific business process? I think not.

When we in IT security look for a model to follow, I believe it should be akin to the role lawyers play in negotiating a business transaction. At some point, the lawyers have done all the negotiating they can. They then have to explain to the business executives responsible for the transaction the risks involved in accepting a particular paragraph or sentence in the contract. In other words, lawyers advise and business executives decide.

In the same way, it is up to IT security folks to explain a particular IT security risk in business terms to the business executive, who will then decide to accept the risk or reduce it by allocating funds to implement the proposed administrative or technical control. And of course meaningful metrics that can show the value of the requested control must be included in the communication process.

Given the importance of information technology to the success of any business, cybersecurity decisions must be elevated to the business level. Risk Management is the language of business executives. While cybersecurity risk management is clearly a young field, we surely cannot give up. We have to work to improve it. I believe the NIST Cybersecurity Framework is a big step in the right direction.


Detection Controls Beyond Signatures and Rules

Charles Kolodgy of IDC has a thoughtful post on SecurityCurrent entitled, Defending Against Custom Malware: The Rise of STAP.

STAP (Specialized Threat Analysis and Protection) technical controls are designed to complement, maybe in the future replace, traditional detection controls that require signatures and rules. STAP controls focus on threats/attacks that have not been seen before or that can morph very quickly and therefore are missed by signature-based controls.

Actors such as criminal organizations and nation states are interested in the long haul.  They create specialized malware, intended for a specific target or groups of targets, with the ultimate goal of becoming embedded in the target’s infrastructure.  These threats are nearly always new and never seen before.  This malware is targeted, polymorphic, and dynamic.  It can be delivered via Web page, spear-phishing email, or any other number of avenues.

Mr. Kolodgy breaks STAP controls into three categories:

  • Virtual sandboxing/emulation and behavioral analysis
  • Virtual containerization/isolation
  • Advanced system scanning

Based on Cymbel’s research, we would create a fourth category for advanced log analysis. There is considerable research, and there are funded companies, going beyond traditional rule- and statistical/threshold-based techniques. Many of these efforts are leveraging Hadoop and/or advanced Machine Learning algorithms.
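As a minimal sketch of what this fourth category might look like in practice, an unsupervised model such as scikit-learn’s IsolationForest can rank hosts whose connection logs deviate from the baseline. The feature set and numbers below are purely illustrative assumptions, not a description of any particular product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Rows: one host per day; columns: bytes out, distinct destinations, failed logins.
baseline = np.array([[5e6, 12, 0], [7e6, 15, 1], [4e6, 10, 0], [6e6, 14, 0]] * 25)
today = np.array([[5.5e6, 13, 0],       # a host behaving like the baseline
                  [9e8, 240, 35]])      # a host sending far more data to many new destinations

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)
print(model.predict(today))             # 1 = looks normal, -1 = anomalous
```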

The Secrets of Successful CIOs (and CISOs)

Rachel King, a reporter with the Wall Street Journal’s CIO Journal, published an article last week entitled, The Secrets of Successful CIOs. She reports on a study performed by Accenture to determine the priorities of high-performing IT executives.

Ms. King highlights high performers’ top three business objectives compared to those of lower-performing IT executives. Unfortunately, what’s missing from the article is how Accenture measured the performance of IT executives. Having said that, this chart is interesting:

[Chart: Accenture Says Highest-Performing CIOs Focus on Customers, Business – The CIO Report, WSJ]

One would naturally jump to the conclusion that high performing Chief Information Security Officers would need to orient themselves to these top priorities as well. But what if you work for one of the CIOs who is focused on cutting business operational costs, increasing workforce productivity, and automating core business processes?

DropSmack: Using Dropbox Maliciously

I found an interesting article on TechRepublic, “DropSmack: Using Dropbox to steal files and deliver malware.”

Given that 50 million people are using Dropbox, it surely looks like an inviting attack vector for cyber adversaries. Jacob Williams (@MalwareJake) seems to have developed malware, DropSmack, that can be embedded in a Word file already synchronized by Dropbox to infect an internal endpoint and provide Command & Control communications.

What technical control do you have in place that would detect and block DropSmack? A network security product would have to be able to decode application files such as Word, Excel, PowerPoint, and PDF, and then detect the malware and/or anomalies embedded in the document.
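Even a simple host-side check illustrates the kind of file decoding involved. Here is a minimal sketch, assuming files landing in a synced Dropbox folder can be inspected before anyone opens them: a modern .docx is a ZIP archive, and the presence of “vbaProject.bin” means it carries VBA macros — one coarse indicator of embedded code, not a full malware detector.

```python
import sys
import zipfile

def has_embedded_macros(path: str) -> bool:
    """Return True if an Office Open XML document carries a VBA macro project."""
    try:
        with zipfile.ZipFile(path) as doc:
            return any(name.endswith("vbaProject.bin") for name in doc.namelist())
    except zipfile.BadZipFile:
        # Legacy binary .doc files (and corrupt files) need a different parser.
        return False

if __name__ == "__main__":
    for path in sys.argv[1:]:
        verdict = "contains macros" if has_embedded_macros(path) else "no macros found"
        print(f"{path}: {verdict}")
```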

Can you prevent Dropbox from being used in your organization? Should you? What about other file sharing applications?

The Real Value of a Positive Control Model

During the last several years I’ve written a lot about the fact that Palo Alto Networks enables you to re-establish a network-based Positive Control Model from the network layer up through the application layer. But I never spent much time on why it’s important.

Today, I will reference a blog post by Jack Whitsitt, Avoiding Strategic Cyber Security Loss and the Unacceptable Offensive Advantage (Post 2/2), to help explain the value of implementing a Positive Control Model.

TL;DR: All information breaches result from human error. The human error rate per unit of information technology is fairly constant. However, because IT is always expanding (more applications and more functions per application), the actual number of human errors resulting in Vulnerabilities (used in the most general sense of the word) per time period is always increasing. Unfortunately, the information security team has limited resources (Defensive Capability) and cannot cope with the users’ ever increasing number of errors. This has created an ever growing “Offensive Advantage (Vulnerabilities – Defensive Capability).”  However, implementing a Positive Control Model to influence/control human behavior will reduce the number of user errors per time interval, which will reduce the Offensive Advantage to a manageable size.

On the network side, Palo Alto Networks’ Next Generation Firewall monitors and controls traffic by user and application across all 65,535 TCP and UDP ports, all of the time, at specified speeds. Granular policies based on any combination of application, user, security zone, IP address, port, URL, and/or Threat Protection profiles are created with a single unified interface that enables the infosec team to respond quickly to new business requirements.

On the endpoint side, Trusteer provides a behavioral type of whitelisting that prevents device compromise and confidential data exfiltration. It requires little to no administrative configuration effort. Thousands of agents can be deployed in days. When implemented on already deployed Windows and Mac devices, Trusteer will detect compromised devices that traditional signature-based anti-virus products miss.

Let’s start with Jack’s basic truths about the relationships between technology, people’s behavior, and infosec resources. Cyber security is a problem that occurs over unbounded time. So it’s a rate problem driven by the ever increasing number of human errors per unit of time. While the number of human errors per unit of time per “unit of information technology” is steady, complexity, in the form of new applications and added functions to existing applications, is constantly increasing. Therefore the number of human errors per unit of time is constantly increasing.

Unfortunately, information security resources (technical and administrative controls) are limited. Therefore the organization’s Defense Capability cannot keep up with the increasing number of Vulnerabilities. Since the number of human errors increases at a faster rate than the resource-limited Defense Capability can handle, an Unacceptable Offensive Advantage is created. Here is a diagram that shows this.

[Figure: offensiveadvantage1]
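The shape of that gap can be mimicked with a toy calculation. Every number below is an illustrative assumption, not data from Jack’s post; the point is only that a constant error rate applied to a growing IT footprint outruns a roughly fixed defensive capacity.

```python
# Errors per unit of IT stay constant while the IT footprint grows, so new
# vulnerabilities per year outgrow a roughly fixed defensive capacity.
ERRORS_PER_IT_UNIT = 5        # human errors per unit of IT per year (assumed constant)
DEFENSE_CAPACITY = 120        # errors the infosec team can remediate per year (assumed fixed)
it_units, growth = 20, 1.15   # current IT footprint and its annual growth rate (assumed)

for year in range(1, 6):
    new_vulnerabilities = ERRORS_PER_IT_UNIT * it_units
    offensive_advantage = max(0.0, new_vulnerabilities - DEFENSE_CAPACITY)
    print(f"year {year}: new vulnerabilities ~{new_vulnerabilities:.0f}, "
          f"offensive advantage ~{offensive_advantage:.0f}")
    it_units *= growth        # complexity keeps increasing
```

In this toy model, bending the Vulnerability curve means lowering the error rate itself, which is exactly what the remaining graphs and the Positive Control Model discussion below are about.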

What’s even worse, most Defensive controls cannot significantly shrink the gap between the Vulnerability curve and the Defense curve because they do not bend the Vulnerability curve, as this graph shows.

[Figure: offensiveadvantage2]

So the only real hope of reducing organizational cyber security risk, i.e. the adversaries’ Offensive Advantage, is to bend the Vulnerability curve, as this graph shows.

[Figure: offensiveadvantage3]

Once you do that, you can apply additional controls to further shrink the gap between the Vulnerability and Defense curves, as this graph shows.

[Figure: offensiveadvantage4]

The question is how to do this. Perhaps Security Awareness Training can have some impact.

I recommend implementing network and host-based technical controls that can establish a Positive Control Model. In other words, only by defining what people are allowed to do and denying everything else can you actually bend the Vulnerability curve, i.e. reduce human errors, both unintentional and intentional.

Implementing a Positive Control Model does not happen instantly, i.e. it, too, is a rate problem. But if you don’t have the technical controls in place, no amount of process is going to improve the organization’s security posture.

This is why firewalls are such a critical network technical control. They are placed at critical choke points in the network, between subnets of different trust levels, with the express purpose of implementing a Positive Control Model.

Firewalls first became popular in the mid-1990s. At that time, when a new application was built, it was assigned a port number. For example, the mail protocol, SMTP, was assigned port 25, and the HTTP protocol was assigned port 80. In those days, (1) protocol and application meant the same thing, and (2) all applications “behaved,” i.e. they ran only on their assigned ports. Given this environment, all a firewall had to do was use the port numbers (and IP addresses) to control traffic. Hence the popularity of port-based stateful inspection firewalls.

Unfortunately, starting in the early 2000s, developers began writing applications to bypass port-based stateful inspection firewalls in order to get their applications deployed quickly in organizations without waiting for the security teams to change policies. Also, different applications were developed that could share a port like port 80, because it was always open to give people access to the Internet. Other techniques, like port-hopping and encryption, were used to bypass the port-based, stateful inspection firewall.

Security teams started deploying additional network security controls like URL Filtering to complement firewalls. This increase in complexity created new problems such as (1) policy coordination between URL Filtering and the firewalls, (2) performance issues, and (3) since URL Filtering products were mostly proxy-based, they would break some of the newer applications, frustrating users trying to do their jobs.

By 2005 it was obvious to some people that application technology had made port-based firewalls and their helpers obsolete. A completely new approach to firewall architecture was needed that (1) classified traffic by application first, regardless of port, and (2) was backward compatible with port-based firewalls to enable the conversion process. This is exactly what the Palo Alto Networks team did, releasing their first “Next Generation” Firewall in 2007.

Palo Alto Networks classifies traffic at the beginning of the policy process by application. It monitors all 65,535 TCP and UDP ports for all applications, all of the time, at specified speeds. This enables organizations to re-establish the Positive Control Model, which bends the “Vulnerability” curve and allows an infosec team with limited resources to reduce what Jack Whitsitt calls the adversaries’ “Offensive Advantage.”

On the endpoint side, Trusteer provides a type of Positive Control Model / whitelisting whereby highly targeted applications like browsers, Java, Adobe Flash, PDF, and Microsoft Office applications are automatically protected behaviorally. The Trusteer agent understands the memory state – file I/O relationship to the degree that it knows the difference between good I/O and malicious I/O behavior. Trusteer then blocks the malicious I/O before any damage can be done.

Thus, human errors resulting from social engineering, such as clicking on links to malicious web pages or opening documents containing malicious code, are automatically blocked. This is all done with no policy configuration effort on the part of the infosec team; the policies are updated periodically by Trusteer. Furthermore, thousands of agents can be deployed in days. Finally, when installed on already deployed Windows and Mac endpoints, it will detect devices that are already compromised.

Trusteer, founded in 2006, has over 40 million agents deployed across the banking industry to protect online banking users. So their agent technology has been battle tested.

In closing, then: only by implementing technical controls that establish a Positive Control Model to reduce human errors can an organization bend the Vulnerability curve sufficiently to reduce the adversaries’ Offensive Advantage to an acceptable level.

Practical Zero Trust Recommendations

[Figure: Cymbel Zero Trust Recommendations]

Cymbel has adopted Forrester’s Zero Trust Model for Information Security. Zero Trust means there are no longer “trusted” networks, devices, or users. There is no such thing as 100% Prevention, if there ever was. In light of the changes we’ve seen during the last several years, this is the only approach that makes sense. There is simply no way to prevent endpoints and servers from ever becoming compromised. For more details, see Cymbel’s Zero Trust Recommendations.
