Forrester Pushes ‘Zero Trust’ Model For Security – DarkReading
Last week Forrester Research began promoting a new term, “Zero Trust,” to define its new security model. The model’s underlying principle is “trust no one.” In other words, you cannot trust the servers and workstations inside your network any more than you can trust external third parties.

Given the nature of the changes we’ve seen during the last 3 to 5 years in technology and the threat landscape, we agree. We have seen a huge increase in what we call “inside-out” attacks where insiders are lured to malware-laden web pages on, for example, Facebook, Twitter, YouTube, and even the New York Times. The malware gets downloaded to the unsuspecting person’s workstation along with the normal content on the web page. From there, the malware steals the person’s credentials to access bank accounts, internal intellectual property, customer records, or whatever the attackers can readily convert to cash. This type of malware is not the traditional single-purpose virus or worm. Rather it’s an agent controlled by remote servers that can modify its functions. These “bots” have gone undetected for days, weeks, months, even years.

From a security perspective, this type of attack looks very similar to a malicious insider, and information security must protect against it along with the traditional “outside-in” attack method.

From my perspective, Forrester’s Zero Trust model and Cymbel’s next-generation defense-in-depth architecture are the same when it comes to network security. Our approach, based on the SANS 20 Critical Security Controls for Effective Cyber Defense, is broader.

However, there is one area where I disagree somewhat with John Kindervag, the Forrester analyst discussing the Zero Trust model, who is reported to have said:

“It’s like a UTM [unified threat management] tool or firewall on steroids,” he says. “It does firewall, IPS, data leakage protection, content filtering, and encryption with a 10-gigabit interface that separates the switching fabrics for each function.”

Gee, how did he leave out packet shaping? I have no doubt that there are vendors attempting to do all these functions in a single appliance, but it reminds me of Network Access Control in 2007. NAC was going to subsume all manner of security functions in a single appliance. The complexity was overwhelming. Furthermore, most organizations really don’t want all that functionality in one box. There is still the need for a defense-in-depth architecture, in our opinion.

Some level of function consolidation is surely reasonable and advantageous to organizations with limited resources, i.e., everyone! However, the expertise needed to develop and advance all of these different functions is virtually impossible to assemble in one company. For example, full packet capture is really about innovative data storage and retrieval. A high-performance, stream-based, application-level firewall/IPS is about innovative deep-packet inspection combined with clever hardware design. And data loss prevention requires proxies and semantics-based data classification algorithms.

While I am surely not saying that we can achieve nirvana now, the components of Cymbel’s next-generation defense-in-depth architecture can provide major improvements in network security today:

  • Next-Generation Firewall with application- and user-level, internal network segmentation, integrated intrusion prevention, and bandwidth management – Palo Alto Networks
  • 0-day threat and botnet command & control communications prevention – FireEye
  • Cloud-based web and email security – Zscaler
  • Device/software discovery and configuration change detection – Insightix, AccelOps
  • High Performance Full Packet Capture – Solera Networks
  • Layer 2, 3, 4 encryption – Certes Networks
  • User-based, behavioral anomaly detection using net flows and logs plus high-performance event correlation – Lancope

I look forward to learning more about Forrester’s Zero Trust model and working with partners who recognize the new landscape and respond with creative solutions for our clients.
Mitre releases log standards architecture – Common Event Expression (CEE)

Finally, on August 27, 2010, Mitre released its log standard document, the Common Event Expression Architecture Overview. The goal of CEE is to standardize event logs to simplify collection, correlation, and reporting, which will drive down the costs of implementing and operating log management controls and improve audit and event analysis.

At present there are no accepted log standards. Each commercial application and security product implements logs in a proprietary way. In addition, the most commonly used log transport protocol, syslog, is unreliable since it is usually implemented over UDP. The custom application environment is even worse, as there are no accepted standards to guide application developers’ implementation of logs for audit and event management.
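The UDP point is easy to see in practice: a UDP syslog datagram is fire-and-forget, with no acknowledgment from the collector. The sketch below (with a hypothetical local collector on port 5514 standing in for a real syslog server) shows Python emitting a syslog message over UDP; nothing in the protocol would tell the sender if the datagram were silently dropped.

```python
import logging
import logging.handlers
import socket

# Stand-in syslog collector: a local UDP socket (hypothetical port 5514).
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 5514))
server.settimeout(2)

# Emit a syslog message over UDP -- fire-and-forget, no delivery guarantee.
# If the collector were down, this send would "succeed" anyway.
handler = logging.handlers.SysLogHandler(
    address=("127.0.0.1", 5514), socktype=socket.SOCK_DGRAM
)
logger = logging.getLogger("audit")
logger.addHandler(handler)
logger.warning("failed logon for user jdoe")

# The raw datagram: a numeric <priority> prefix, then free-form text --
# exactly the kind of loosely structured record CEE aims to standardize.
datagram, _ = server.recvfrom(1024)
print(datagram.decode())
server.close()
```

Switching `socktype` to `socket.SOCK_STREAM` gets TCP’s delivery semantics, but true reliability (acknowledged, verifiable log trails) is what CEE’s transport work targets.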

Why, after ten years of log management efforts, are there still no standards? In my opinion, it’s because government agencies and enterprises have not recognized that they are indirectly bearing the costs of the lack of standardization. Now that log management has become mandatory for compliance and strongly recommended for effective cyber defense, organizations will realize the need for log standardization. Initially, it’s going to be up to the Federal Government and large enterprises to force CEE compatibility as a requirement of purchase in order to get product manufacturers to adhere to CEE. The log management vendors will embrace CEE once they see product manufacturers using it.

Here is the Common Event Expression Architecture Overview (CEE AO) Abstract:

This Common Event Expression (CEE) Architecture defines the structure and components that comprise the CEE event log standard. This architecture was developed by MITRE, in collaboration with industry and government, and builds upon the Common Event Expression Whitepaper [1]. This document defines the CEE Architecture for an open, practical, and industry-accepted event log standard. This document provides a high-level overview of CEE along with details on the overall architecture and introduces each of the CEE components including the data dictionary, syntax encodings, event taxonomies, and profiles. The CEE Architecture is the first in a collection of documents and specifications, whose combination provides the necessary pieces to create the complete CEE event log standard.
KEYWORDS: CEE, Logs, Event Logs, Audit Logs, Log Analysis, Log Management, SIEM

There are four components of the CEE Architecture – CEE Dictionary and Taxonomy (CDET), Common Log Syntax (CLS), Common Log Transport (CLT), and Common Event Log Recommendations (CELR). (The Dictionary and Taxonomy halves of CDET are described separately below.)
  • Common Log Syntax (CLS) – how the event and event data are represented. The event syntax is what an event producer writes and what an event consumer processes.
  • CEE Dictionary – defines a collection of event fields and value types that can be used within event records to specify the values of an event property associated with a specific event instance.
  • CEE Taxonomy – defines a collection of “tags” that can be used to categorize events. Its goal is to provide a common vocabulary, through sets of tags, to help classify and relate records that pertain to similar types of events.
  • Common Event Log Recommendations (CELR) – provides recommendations to developers and implementers of applications or systems as to which events and fields should be recorded in certain situations and what log messages should be recorded for various circumstances. CELR provides this guidance in the form of a machine-readable profile. The CELR also defines a function – a group of event structures that comprise a certain capability. For example, a “firewall” function can be defined consisting of “connection allow” and “connection block” event structures. Similarly, an “authentication management” function can be composed of “account logon,” “account logoff,” “session started,” and “session stopped.”
  • Common Log Transport (CLT) – provides the technical support necessary for an improved log transport framework. A good framework requires more than just standardized event records; support is needed for international string encodings, standardized event record interfaces, and reliable, verifiable log trails. In addition to the application support, the CLT event streams supplement the CLS event record encodings to allow systems to share event records securely and reliably.
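To make the components concrete, here is a sketch of what a CEE-style event record might look like. The CLS encodings were still in draft at the time, so the tag and field names below are illustrative, not normative: Taxonomy tags classify the event (the CELR “firewall” function’s “connection block” event structure), while Dictionary fields carry the typed event properties.

```python
import json

# Hypothetical CEE-style event record; names are illustrative, not from
# the draft specification.
event = {
    # Taxonomy: tags that classify what kind of event this is
    "tags": ["firewall", "connection", "block"],
    # Dictionary: standardized field names with typed values
    "fields": {
        "time": "2010-08-27T14:31:02Z",
        "src_ip": "10.1.2.3",
        "src_port": 49152,
        "dst_ip": "192.0.2.10",
        "dst_port": 445,
        "proto": "tcp",
        "action": "block",
    },
}

# A syntax encoding (here JSON, one of several CEE contemplated) is what
# an event producer writes and an event consumer parses.
record = json.dumps(event, sort_keys=True)
print(record)
```

The payoff of this structure is that a consumer can filter on `tags` and correlate on `fields` without per-vendor parsers, which is precisely the cost CEE aims to remove.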
The CEE Architecture Overview document also defines the CEE “product” approval management process and four levels of CEE conformance.

CEE holds the promise of driving down the costs of implementing Log Management systems and improving the quality of audit and event analysis. However, there is still much work to be done, for example in defining taxonomies and in defining and testing interoperability at the transport and syntax levels.

Mitre has had mixed results over the years in its efforts to standardize security processes. CVE (Common Vulnerabilities and Exposures) has been its biggest success, as virtually all vulnerability publishers use CVE numbers. CEE is much more ambitious, though, and will require more money and resources than Mitre is accustomed to having at its disposal.


SIEM: Moving Beyond Compliance

Dr. Anton Chuvakin recently wrote a white paper for RSA entitled SIEM: Moving Beyond Compliance. While I am no fan of RSA’s enVision product (Cymbel partners with AccelOps), the white paper is quite good. As its title says, it discusses “use cases” for SIEM beyond the basic compliance requirements that drive a lot of SIEM projects. Here is the list with my comments:

  • Server user activity monitoring – It’s not always possible to collect the logs from all servers. Sometimes a network-based product like PacketMotion is needed to complement log collection.
  • Tracking user actions across disparate systems – Same comments as above.
  • Comprehensive firewall monitoring – Key capability needed by the SIEM is Active Directory integration for mapping IP addresses to users and generating reports by AD groups.
  • Malware protection – I think this would be better termed “malware behavior detection,” since a SIEM cannot actually detect malware itself as an Intrusion Prevention/Detection System would. Ideally, the SIEM should provide a behavior anomaly detection capability.
  • Web server attack detection – A SIEM can provide “detection” capabilities to complement the “protection” capabilities of a Web Application Firewall (Cymbel partners with Barracuda) whose logs also ought to be captured and correlated.
  • Incident response enablement – In addition to SIEM, Cymbel recommends a Full Packet Capture product be deployed. Cymbel partners with Solera Networks.
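The Active Directory integration called out in the firewall monitoring use case boils down to a join: map source IPs to users via AD logon events, then aggregate by AD group. A minimal sketch, with hypothetical session and firewall data standing in for domain-controller and firewall logs:

```python
from collections import Counter

# Hypothetical snapshot of AD logon sessions: IP -> (user, AD group).
# In practice a SIEM builds this from domain-controller security logs.
ad_sessions = {
    "10.1.2.3": ("jdoe", "Finance"),
    "10.1.2.7": ("asmith", "Engineering"),
}

# Hypothetical firewall deny events: (src_ip, dst_port).
firewall_denies = [
    ("10.1.2.3", 445),
    ("10.1.2.3", 3389),
    ("10.1.2.7", 22),
    ("10.9.9.9", 23),  # no AD session -> unknown user
]

# Report denies per AD group instead of per raw IP address.
denies_by_group = Counter()
for src_ip, _port in firewall_denies:
    user, group = ad_sessions.get(src_ip, ("unknown", "unmapped"))
    denies_by_group[group] += 1

print(dict(denies_by_group))  # {'Finance': 2, 'Engineering': 1, 'unmapped': 1}
```

The “unmapped” bucket is itself useful: traffic from hosts with no logged-on AD user is often the first place to look.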

Anton closes with the three “worst practices” he has seen. Based on my six years of SIEM experience, I agree:

  • Storing logs for too short a time
  • Trying to prioritize logs and store “just what’s important”
  • Trying to use advanced SIEM features before establishing success with basic log collection and reporting

NetworkWorld discusses next-generation log management

As over-used as the term “next-generation” is, it is a genuinely valid adjective for describing a new class of log management products. Jon Oltsik, in a recent Network World opinion piece, describes several of the characteristics of next-generation log management – consolidation of logs and flows, location awareness, and deeper granular visibility.

I would go further. Having been involved in log management and security information and event management since 2002, the value add (beyond compliance reporting) is actionable intelligence. And actionable intelligence depends on adding context to the logs. One way, as Jon states, is correlating logs with flows. But context can further be enhanced by adding configuration, availability, performance, virtualization, and business service information.
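Correlating logs with flows is, mechanically, a join on host and time window. A minimal sketch with hypothetical records (one authentication log event, two NetFlow-style flow records) shows how the added context turns a bare logon message into something actionable:

```python
from datetime import datetime, timedelta

# Hypothetical authentication log event.
log_event = {
    "time": datetime(2010, 9, 1, 10, 15, 0),
    "ip": "10.1.2.3",
    "message": "admin logon",
}

# Hypothetical NetFlow-style flow records from the same network.
flows = [
    {"start": datetime(2010, 9, 1, 10, 14, 50), "src": "10.1.2.3",
     "dst": "203.0.113.5", "bytes": 48_000_000},
    {"start": datetime(2010, 9, 1, 9, 0, 0), "src": "10.1.2.3",
     "dst": "192.0.2.9", "bytes": 1_200},
]

# Correlate: flows from the same host within 5 minutes of the log event.
window = timedelta(minutes=5)
related = [
    f for f in flows
    if f["src"] == log_event["ip"]
    and abs(f["start"] - log_event["time"]) <= window
]

for f in related:
    print(f["dst"], f["bytes"])  # the large outbound transfer stands out
```

An admin logon by itself is noise; an admin logon adjacent to a 48 MB outbound transfer is intelligence. Layering in configuration, performance, and business service data extends the same join with more context dimensions.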

The first and only product I know of that does all this for less than seven figures is from AccelOps. They did a nice blog post that goes into more detail.