The evolution of SIEM

In the last several years, a new “category” of log analytics for security has arisen called “User Behavior Analytics.” From my 13-year perspective, UBA is really the evolution of SIEM.

The term “Security Information and Event Management” (SIEM) was coined by Gartner 10 years ago. At the time, some people were arguing over whether Security Information Management (SIM) or Security Event Management (SEM) was the better term. Gartner simply combined the two and ended the debate.

The focus of SIEM was on consolidating and analyzing log information from disparate sources such as firewalls, intrusion detection systems, operating systems, etc. in order to meet compliance requirements, detect security incidents, and provide forensics.

At the time, the correlation was designed mostly around IP addresses, although some systems could correlate using ports and protocols, and even users. All log sources were in the datacenter. And most correlation was rule-based, although there was some statistical analysis done as early as 2003. Finally, most SIEMs used relational databases to store the logs.

Starting in the late 2000s, organizations began to realize that, while they were meeting compliance requirements, they were still being breached because of the limitations of “traditional” SIEM solutions’ incident detection capabilities:

  • They were designed to focus on IP addresses rather than users. Today, correlating by IP address is all but useless given the increasing number of remote and mobile users and the number of times a day those users’ IP addresses can change. Retrofitting a traditional SIEM for user-centric analysis has proven difficult.
  • They are notoriously difficult to administer, mostly because of the rule-based method of event correlation. Customizing hundreds of rules and keeping them up to date is time-consuming. Too often organizations did not realize this when they purchased the SIEM and therefore under-budgeted the resources needed to administer it.
  • They tend to generate too many false positives, again mostly because of rule-based event correlation. This is particularly insidious: analysts start to ignore alerts because investigating most of them turns out to be a waste of time. It also hurts morale, resulting in high turnover.
  • They miss true positives, either because the generated alerts are simply missed by analysts overwhelmed by the volume, or because no rule was ever built to detect the attacker’s activity. The rule-building cycle is usually backward-looking: an incident happens, and then rules are built to detect that situation should it happen again. Since attackers are constantly innovating, rule building is a losing proposition.
  • They tend to have sluggish performance, partly because organizations underestimate, and therefore under-budget, the infrastructure requirements, and partly because of the limitations of relational databases.

In the last few years, we have seen a new security log analysis “category” defined as “User Behavior Analytics” (UBA), which focuses on analyzing user credentials and user-oriented event data. The data stores are almost never relational, and the algorithms are mostly machine learning based: predictive in nature and requiring much less tuning.
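
To make the contrast with rule-based correlation concrete, here is a minimal sketch of the kind of baseline-and-deviation scoring a UBA tool might apply to per-user activity. The data layout, threshold, and function names are illustrative assumptions, not any vendor’s implementation.

```python
from statistics import mean, stdev

def score_user_activity(history, today, threshold=3.0):
    """Flag users whose activity today deviates sharply from their own baseline.

    history: {user: [daily event counts over a baseline window]}
    today:   {user: today's event count}
    Returns (user, z_score) pairs for users beyond the threshold.
    """
    anomalies = []
    for user, counts in history.items():
        if len(counts) < 2:
            continue                      # not enough history for a baseline
        mu, sigma = mean(counts), stdev(counts)
        sigma = sigma or 1.0              # avoid division by zero on flat baselines
        z = (today.get(user, 0) - mu) / sigma
        if abs(z) > threshold:
            anomalies.append((user, round(z, 1)))
    return anomalies

# Example: alice suddenly authenticates far more often than her norm.
history = {"alice": [20, 22, 19, 21, 20], "bob": [5, 7, 6, 5, 6]}
print(score_user_activity(history, {"alice": 95, "bob": 6}))  # flags alice only
```

The point is that the “rule” here is learned from each user’s own history rather than written and maintained by an analyst.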

Notice how UBA solutions address most of the shortcomings of traditional SIEMs for incident detection. So the question is: why is UBA considered a separate category? It seems to me that UBA is the evolution of SIEM: better user interfaces (in some cases), better algorithms, better log storage systems, and a more appropriate “entity” on which to focus, i.e. users. In addition, UBAs can support user data coming from SaaS as well as on-premise applications and controls.

I understand that some UBA vendors’ short-term, go-to-market strategy is to complement the installed SIEM. It seems to me this is the justification for considering UBA and SIEM as separate product categories. But my question is, how many organizations are going to be willing to use two or three different products to analyze logs?

In my view, in 3-5 years there won’t be a separate UBA market. The traditional SIEM vendors are already attempting to add UBA capabilities with varying degrees of success. We are also beginning to see SIEM vendors acquire UBA vendors. We’ll see how successful the integration process will be. A couple of UBA vendors will prosper/survive as SIEM vendors due to a combination of superior user interface, more efficacious analytics, faster and more scalable storage, and lower administrative costs.

What is a ‘sophisticated’ cyberattack?

Ira Winkler and Ari Treu Gomes have defined eight rules to help classify cyberattacks. They call them “Irari” rules, a contraction of their first names. Furthermore, each rule is actually a recommendation for improving enterprises’ security defenses.

I agree that the victims of cyberattacks too often classify the breaches to which they were subject as “sophisticated” when they were anything but. On the other hand, Ira and Ari have gone too far, for the following reasons:

  1. No organization I am aware of has the resources to fully support all eight recommendations. So how do you prioritize? Risk management you say?
  2. The technology simply does not yet exist to successfully implement some of the recommendations.

There is good news though. During the last few years, largely due to the success of companies like Palo Alto Networks and FireEye, there has been a tremendous surge in well-funded, innovative technical security controls that make many of the Irari recommendations feasible. By innovative, I mean controls that (1) are efficacious at improving security, (2) enable process improvement, and (3) carry a low risk of negatively impacting business processes.

Here are the eight Irari rules and my comments:

The malware used should have been detected. Keeping your anti-virus up-to-date seems reasonable. However, you should not be too satisfied, because signature-based anti-virus is a very low bar. In a variation on HD Moore’s Law, any attacker can buy software to modify her malware to bypass anti-virus products. I recommend starting the process of adding a non-signature-based endpoint prevention solution and replacing “paid-for” A/V with Microsoft’s free tools.

The attack exploited vulnerabilities where patches were available. This is a tough one. First, is it really possible to patch every vulnerability? Second, if you are not going to, how do you prioritize? CVSS has some well-understood weaknesses. There are better ways to prioritize the risks of vulnerabilities.

Multifactor authentication was not in use on critical servers. This makes sense. However, the cost of managing certificates is, too often, not considered.

Static passwords were used in attacks on critical servers. While the concept of changing passwords frequently sounds good, too often the human cost, measured in the time consumed changing passwords, is not considered. An automated password changer would be interesting.

If phishing was involved, there was no awareness program in place that went beyond phishing simulations and computer-based training. Phishing is a primary attack vector. The issue is how effective is your security awareness program? Moreover, how well can you monitor its effectiveness? Note here that Ira Winkler’s company, Secure Mentem, provides security awareness programs.

There was poor network segmentation that allowed the attackers to jump from low-value networks to critical systems. There is no doubt that segmentation is of critical importance. It’s well understood, as the Irari authors point out, that better segmentation in a couple of areas would have prevented the credit card exfiltration in the Target breach. However, until very recently, the complexity and implementation costs of datacenter segmentation put it out of reach for most organizations.

User accounts that were compromised had excessive privileges. Another excellent recommendation that, until very recently, was extremely difficult to prevent or detect. Users need administrative privileges for a variety of reasons. But there are now security agents that prevent unneeded activities despite users having administrative privileges. There are also User Behavior Analytics tools that are easy to administer and operate that will highlight users whose application access rights are greater than their peers.

Zero Trust on the Endpoint


The Forrester Zero Trust Model (Zero Trust) of information security advocates a “never trust, always verify” philosophy in protecting information resources. Though the model has traditionally been applied to network communications, it is clear that today’s cyber threats warrant a new approach in which the Zero Trust model is extended to endpoints. Palo Alto Networks® Traps™ Advanced Endpoint Protection is an innovative endpoint protection technology that prevents exploits and malicious executables, both known and unknown. It has the proven capacity to act as the enforcer for Zero Trust and to serve as a vital component of an enterprise’s security architecture and compliance suite on the endpoint.

If you would like a copy of this whitepaper, please fill out the form on the right side of this page.


Introducing Next-Generation Honeynets


Attivo Networks is introducing a next-generation, virtualized honeynet solution that enables you to quickly deploy information resources that appear to be part of your network. These honeynets are closely monitored virtual environments that appear to contain information and services of value to attackers, yet require very little maintenance. Attivo honeynets host multiple Windows and Linux operating systems running a multitude of applications and services, so that attackers believe they are accessing production networks. Attivo honeynets are a low-maintenance, low-false-positive detection control that alerts you to attackers who have bypassed your perimeter defenses.

If you would like a copy of this whitepaper, please fill out the form on the right side of this page.


Introducing Active Breach Detection


LightCyber’s Active Breach Detection identifies active attacks after they have circumvented your threat prevention systems and before they have created a material breach of confidential information. LightCyber combines (1) machine learning that continuously profiles user and device behavior to detect malicious attack activity on your network with (2) agentless endpoint analysis that validates the attack. The result of these coordinated network and endpoint analyses is high-quality alerts with a very low rate of false positives. Finally, LightCyber integrates with your prevention controls for remediation.

If you would like a copy of this whitepaper, please fill out the form on the right side of this page.


Introducing the Cloud-DMZ™

 

Sentrix has introduced a paradigm-shifting architecture for web application security that leverages the cloud as an enterprise protective zone (DMZ) to eliminate the complete range of web application/site attacks, including DDoS. In addition, moving deterministic content to the cloud enables easy scalability when needed. Traditional Web Application Firewalls cannot keep up with the rapid changes driven by DevOps and marketing, and therefore devolve into low-value, blacklisting controls. Sentrix’s continuous web application/site crawling automatically updates the secure, cloud-based content replica and whitelist rules to protect business transactions.

If you would like a copy of this white paper, please fill out the form on the right side of this page.


Next Generation Firewall Best Practices

Cymbel has been providing consulting services related to next generation firewalls since 2007. Based on our experience, we are often asked about “best practices.” Given the variety of deployment scenarios and different priorities of our clients, we have found it difficult to develop a general set of best practices. However, I recently observed that Palo Alto Networks has been adding best practices information to its Administration Guides.

So I thought it might be useful to pull together Palo Alto Networks’ best practices into a single document, which we have done. If you are interested in receiving this document, please let me know by filling out the form on this page, or by contacting me via email. If you don’t have my email address, please go to my LinkedIn page: www.linkedin.com/in/riskpundit/

Perspective on NSS Labs – Palo Alto Networks controversy

I am posting the Comment I wrote on the Palo Alto Networks site in response to Lee Klarich’s post, which itself was in response to NSS Labs’ 2014 report on Next Generation Firewalls.

I have two points to make about the Palo Alto Networks – NSS Labs controversy. One, the NSS Labs Next Generation Firewall Comparative Analysis simply does not pass the smell test. Two, it’s not even clear to me that all of the firewalls tested are actually Next Generation Firewalls.

Regarding my first point, I am a Principal at Cymbel, a Palo Alto Networks reseller since 2007. We work with some of the largest organizations in the United States who have put Palo Alto Networks firewalls through extremely rigorous evaluations for extended periods, and have then deployed Palo Alto firewalls for many years. NSS Labs seems to be saying that all of the people in these organizations are idiots. This does not make sense to me.

In addition, NSS Labs seems to be saying that the Gartner people, who speak with far more firewall customers than we do, and place Palo Alto Networks in the Leader Quadrant and furthest to the right, are also morons. I’m not buying it.

Regarding my second point, at a more basic level, what is NSS Labs’ definition of a Next Generation Firewall? Since I am not a paying customer of NSS Labs, I don’t know. Let me start with the definition of a firewall – the ability to establish a Positive Control Model. In other words, define what network traffic is allowed, and block everything else, i.e. default deny.

In the 1990s, this was relatively easy because all applications ran on well-defined port numbers. Therefore you could define policies based on port numbers, IP addresses, and protocols and be assured that you had full network visibility and control.

Starting in the early 2000s, this well-behaved order began to break down. Applications were built to share already open ports in order to bypass traditional stateful inspection firewalls. By the mid-2000s, there were hundreds, if not thousands, of applications that share ports, automatically hop from port to port, and use encryption to evade traditional firewalls. Thus, these traditional firewalls were essentially rendered useless, and could no longer support a Positive Control Model.

So a new type of firewall was needed. In order to re-establish a positive control model, this new type of firewall has to monitor all 65,535 TCP and UDP ports for all applications, all of the time. In other words, a firewall that enables you to define which applications are allowed, regardless of the ports on which they run, and block all of the others, known or unknown.

Furthermore, a Next Generation Firewall must enable you to lock a specifically allowed application to specifically allowed port(s), and prevent any other application from running on the port(s) opened for that specific application.
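
As a rough illustration of that definition (and only an illustration; real application identification engines are far more involved), the sketch below evaluates sessions against an application-aware, default-deny policy. The identify_application() stand-in and the allowed pairs are hypothetical.

```python
# Minimal sketch of application-aware, default-deny policy evaluation.
# identify_application() is a toy stand-in for a firewall's application
# identification engine: it classifies by payload content, never by port.

ALLOWED = {
    # (application, destination port) pairs explicitly permitted;
    # everything else, known or unknown, is denied.
    ("web-browsing", 80),
    ("ssl", 443),
}

def identify_application(session):
    payload = session.get("payload", b"")
    if payload.startswith((b"GET ", b"POST ")):
        return "web-browsing"
    if payload[:1] == b"\x16":            # TLS handshake record type
        return "ssl"
    return None                           # unknown application

def evaluate(session):
    app = identify_application(session)
    if app and (app, session["dst_port"]) in ALLOWED:
        return "allow"                    # allowed app, locked to its permitted port
    return "deny"                         # unknown app, or known app on the wrong port

print(evaluate({"dst_port": 443, "payload": b"GET / HTTP/1.1\r\n"}))  # deny
print(evaluate({"dst_port": 443, "payload": b"\x16\x03\x01..."}))     # allow
```

Note that HTTP tunneled over port 443 is denied even though port 443 is open; the application, not the port, determines the verdict.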

Palo Alto Networks, in 2007, was the first company to ship this new type of firewall, which Gartner, in 2009, called a “Next Generation Firewall.” Since then, virtually every firewall vendor in the industry has adopted the term. But in reality, which ones actually meet the real definition of a Next Generation Firewall?

I would recommend that NSS Labs release the details of its testing methodology for all to review. By keeping their testing methodology behind a paywall, they are simply feeding into Palo Alto’s “pay to play” contention.

Detecting unknown malware using sandboxing or anomaly detection

It’s been clear for several years that signature-based anti-virus and Intrusion Prevention/Detection controls are not sufficient to detect modern, fast-changing malware. Sandboxing has become a popular (rightfully so) complementary control to detect “unknown” malware, i.e. malware for which no signature yet exists. The concept is straightforward: analyze inbound suspicious files by allowing them to run in a virtual machine environment. While sandboxing has been successful, I believe it’s worthwhile to understand its limitations. Here they are:

  • Access to the malware in motion, i.e. on the network, is not always available.
  • Most sandboxing solutions are limited to Windows.
  • Malware authors have developed techniques to detect virtualized or testing environments.
  • Newer malware communication techniques use random, one-time domains and non-HTTP protocols.
  • Sandboxing cannot confirm that malware actually installed on and infected the endpoint.
  • Droppers, the first stage of multi-stage malware, are often the only part that is analyzed.

Please check out Damballa’s Webcast on the Shortfalls of Security Sandboxing for more details.

Let me reiterate: I am not saying that sandboxing is not valuable. It surely is. However, due to the limitations listed above, we recommend that it be complemented by a log-based anomaly detection control that analyzes one or more of the following: outbound DNS traffic, all outbound traffic through the firewall and proxy server, user connections to servers, POS terminal connections to servers (for retailers), and application authentications and authorizations. In addition to the different network traffic sources, there is also a variety of statistical approaches available, including supervised and unsupervised machine learning algorithms.
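
As one concrete example of the outbound-DNS option, here is a minimal sketch that flags previously unseen, high-entropy domains of the sort produced by the random, one-time domains mentioned above. The threshold and the notion of a “known domains” set are illustrative assumptions.

```python
import math
from collections import Counter

def entropy(label):
    """Shannon entropy of a string; algorithmically generated labels score high."""
    counts = Counter(label)
    return -sum((c / len(label)) * math.log2(c / len(label)) for c in counts.values())

def flag_suspicious_domains(queries, known_domains, entropy_threshold=3.5):
    """Return queried domains that are both previously unseen and high-entropy."""
    flagged = []
    for domain in queries:
        label = domain.split(".")[0]          # the leftmost label carries the randomness
        if domain not in known_domains and entropy(label) > entropy_threshold:
            flagged.append(domain)
    return flagged

# A random, one-time domain stands out against the organization's normal traffic.
print(flag_suspicious_domains(
    ["mail.example.com", "x7f3kq9tz2vbm1.net"],
    known_domains={"mail.example.com"},
))  # -> ['x7f3kq9tz2vbm1.net']
```

In practice a check like this would be one signal among several, combined with the firewall, proxy, and authentication sources listed above.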

So in order to substantially reduce the risk of a data breach from unknown malware, the issue is not sandboxing or anomaly detection, it’s sandboxing and anomaly detection.

This post has been cross-posted from www.riskpundit.com.

How Palo Alto Networks could have prevented the Target breach

Brian Krebs’ recent posts on the Target breach, “A First Look at the Target Intrusion, Malware” and “A Closer Look at the Target Malware,” provide the most detailed and accurate analysis available.

The malware the attackers used captured complete credit card data contained on the mag stripe by “memory scraping.”

This type of malicious software uses a technique that parses data stored briefly in the memory banks of specific POS devices; in doing so, the malware captures the data stored on the card’s magnetic stripe in the instant after it has been swiped at the terminal and is still in the system’s memory. Armed with this information, thieves can create cloned copies of the cards and use them to shop in stores for high-priced merchandise. Earlier this month, U.S. Cert issued a detailed analysis of several common memory scraping malware variants.
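
To make “memory scraping” concrete, the sketch below does what both the scrapers and the forensic tools that hunt for them essentially do: scan a raw buffer for byte sequences shaped like ISO 7813 Track 2 data. The regex is simplified and the sample uses a standard test card number; neither reflects the actual Target malware.

```python
import re

# Simplified pattern for Track 2 data: ';' start sentinel, a 13-19 digit PAN,
# '=' separator, YYMM expiry, 3-digit service code, discretionary data, '?' end sentinel.
TRACK2 = re.compile(rb";\d{13,19}=\d{4}\d{3}\d*\?")

def find_track2(memory_dump: bytes):
    """Return any Track 2-shaped byte sequences found in a raw memory buffer."""
    return TRACK2.findall(memory_dump)

# Fabricated example: a well-known test PAN embedded among other bytes.
sample = b"\x00\x17log;4111111111111111=25121010000000000?\xff"
print(find_track2(sample))  # -> [b';4111111111111111=25121010000000000?']
```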

Furthermore, no known antivirus software at the time could detect this malware.

The source close to the Target investigation said that at the time this POS malware was installed in Target’s environment (sometime prior to Nov. 27, 2013), none of the 40-plus commercial antivirus tools used to scan malware at virustotal.com flagged the POS malware (or any related hacking tools that were used in the intrusion) as malicious. “They were customized to avoid detection and for use in specific environments,” the source said.

The key point I want to discuss however, is that the attackers took control of an internal Target server and used it to collect and store the stolen credit card information from the POS terminals.

Somehow, the attackers were able to upload the malicious POS software to store point-of-sale machines, and then set up a control server within Target’s internal network that served as a central repository for data hoovered by all of the infected point-of-sale devices.

“The bad guys were logging in remotely to that [control server], and apparently had persistent access to it,” a source close to the investigation told KrebsOnSecurity. “They basically had to keep going in and manually collecting the dumps.”

First, obviously the POS terminals have to communicate with specific Target servers to complete and store transactions. Second, the communications between the POS terminals and the malware on the compromised server(s) could have been denied had there been policies defined and enforced to do so. Palo Alto Networks’ Next Generation Firewalls are ideal for this use case for the following two reasons:

  1. Palo Alto Networks enables you to include zone, IP address, port, user, protocol, application information, and more in a single policy.
  2. Palo Alto Networks firewalls monitor all ports for all protocols and applications, all of the time, to enforce these policies and establish a Positive Control Model (default deny or application traffic whitelisting); a hypothetical rule set is sketched below.
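
Here is a hypothetical rule set of the kind described in points 1 and 2, expressed as plain data rather than any vendor’s actual configuration syntax; the zone names, application names, and ports are assumptions for illustration.

```python
# Hypothetical POS segmentation policy (not Palo Alto Networks syntax).
# Zone names, application names, and ports are illustrative assumptions.
POS_POLICY = [
    # POS terminals may speak only the payment application, only to the
    # payment-processing servers, only on the expected port.
    {"from_zone": "pos-terminals", "to_zone": "payment-servers",
     "application": "payment-app", "port": 443, "action": "allow"},
    # Management traffic to the terminals is likewise explicit.
    {"from_zone": "pos-management", "to_zone": "pos-terminals",
     "application": "pos-mgmt", "port": 8443, "action": "allow"},
    # Anything else, including an unknown application on an open port or a
    # POS terminal talking to an arbitrary internal server, hits default deny.
    {"from_zone": "any", "to_zone": "any",
     "application": "any", "port": "any", "action": "deny"},
]
```

Under a policy like this, an infected terminal’s connection to the attackers’ staging server would match only the final rule and be dropped.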

You might very well ask, why couldn’t Router Access Control Lists be used? Or why not a traditional port-based, stateful inspection firewall? Because these types of network controls limit policy definitions to ports, IP addresses, and protocols, and therefore cannot enforce a Positive Control Model. They are simply not granular enough to control traffic with a high degree of confidence. One or the other might have worked in the 1990s. But by the mid-2000s, network-based applications were regularly bypassing both of these types of controls.

Therefore, if Target had deployed Palo Alto Networks firewalls between the POS terminals and their servers with granular policies to control POS terminals’ communications by zone, port, and application, the malware on the POS terminals would never have been able to communicate with the server(s) the attackers compromised.

In addition, it’s possible that the POS terminals would never have become infected in the first place, because the server(s) the attackers initially compromised would not have been able to communicate with the POS terminals. Note, I am not assuming that the servers used to compromise the POS terminals were the same servers used to collect the credit card data that was breached.

Unfortunately, a control with the capabilities of Palo Alto Networks is not specified by the Payment Card Industry (PCI) Data Security Standard (DSS). Yes, “Requirement #1: Install and maintain a firewall configuration to protect cardholder data” seems to cover the subject. However, you can fully meet these PCI DSS requirements with a port-based, stateful inspection firewall, and, as I said above, an attacker can easily bypass this 1990s type of network control. Retailers and e-Commerce sites need to go beyond PCI DSS to actually protect themselves. What you need is a Next Generation Firewall like Palo Alto Networks’, which enables you to define and enforce a Positive Control Model.

This post has been cross-posted from www.riskpundit.com.