Due to the adoption of new technologies and the changes in motives and methods of cyber adversaries, traditional technical controls and processes do not effectively reduce the risks of confidential data breaches.
In September 2010, Forrester introduced the concept of the Zero Trust Model. Forrester’s John Kindervag summed it up this way:
The Zero Trust Model of information security simplifies how information security is conceptualized by assuming there are no longer “trusted” interfaces, applications, traffic, networks or users. It takes the old model — “trust but verify” — and inverts it, since recent breaches have proven when an organization trusts, it doesn’t verify.
As a visionary, Forrester is calling for a complete re-architecture of datacenters, placing security at the center. While we applaud their vision and surely look forward to the “integrated segmentation gateway as the nucleus of the network,” Cymbel focuses on the opportunities available right now for enterprises to reduce the risks associated with modern malware by re-thinking and re-implementing a defense-in-depth architecture using the Zero Trust approach.
Teams within enterprises, with and without the support of Information Technology management, are embracing new technologies in the constant quest to improve business and personal effectiveness and efficiency. These technologies include virtualization, cloud computing, converged data, voice, and video networks, Web 2.0 applications, social networking, smartphones, and tablets. In addition, the percentage of remote and mobile workers in organizations continues to increase, further reducing the value of physical perimeter controls.
The motives of attackers have shifted from fame and glory to cash. It’s no longer about who can build the fastest spreading worm. It’s about stealing information that can be sold for money. The attackers are more organized than ever. There is a complex underground economy including criminal software developers and bot herders financed by criminal gangs and nation-states. In addition, we have seen the rise of hacktivists motivated by political beliefs.
The primary vector of attackers has shifted from “outside-in” to “inside-out.” Formerly, the primary attack vector was to directly penetrate the enterprise at the network level through open ports and to exploit operating system vulnerabilities. We call this attack methodology outside-in.
In the last several years, with the increased popularity of social networking and remote and mobile work, the primary attack vector shifted to enticing users to malware-infested web pages capable of compromising users’ devices via their browsers. (While enticing a user to click on a link in a phishing email is still very popular, simply opening a malicious HTML email can be enough to compromise your device.) We call this inside-out because the user inside the “protected” network reaching out to an external web site can be just as vulnerable as the user accessing the Internet from home.
Zero Trust Guidelines
While all organizations have different priorities, the following guidelines have worked well over the years:
- Balance Budget across Prevention, Detection, and Response Controls - If Zero Trust means anything, it means that you cannot prevent all endpoints and servers from becoming compromised 100% of the time. Therefore, you must acknowledge that there are compromised network-attached devices in your organization, which means you must invest resources in detecting and responding to them. Too many organizations are over-weighted on Prevention controls and under-weighted on Detection and Response controls, and therefore need to rebalance.
- Use a Kill Chain model to select technical controls - Today, no single technical control can protect the enterprise from advanced persistent adversaries. Select technical controls to cover as many stages of the attacker’s process (the Kill Chain) as makes economic sense.
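To make the Kill Chain guideline concrete, the gap analysis can be sketched in a few lines of Python. This is an illustrative sketch only: the stage names follow the well-known Lockheed Martin Cyber Kill Chain, but the control names and their stage coverage are hypothetical examples, not specific products.

```python
# Map candidate controls to Kill Chain stages and report uncovered stages.
KILL_CHAIN = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command-and-control", "actions-on-objectives",
]

# Hypothetical controls and the stages each one addresses.
controls = {
    "next-gen firewall":        {"delivery", "command-and-control"},
    "email protection service": {"delivery"},
    "file sandbox":             {"exploitation", "installation"},
    "behavior analytics":       {"actions-on-objectives"},
}

def coverage_gaps(controls):
    """Return Kill Chain stages not covered by any selected control."""
    covered = set().union(*controls.values())
    return [stage for stage in KILL_CHAIN if stage not in covered]

print(coverage_gaps(controls))
```

The uncovered stages are where the next control dollar is likely to do the most good, subject to the economic-sense test above.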
Zero Trust Recommendations
Cymbel has developed a set of practical Zero Trust recommendations that can be implemented today.
1. Update Network Security with Next Generation Firewalls - A true Next Generation Firewall will enable you to re-establish a Positive Control Model that includes remote and mobile users, and will provide threat protection across the Kill Chain for known and unknown attacks. Only a Next Generation Firewall that can detect all applications across all 65,535 TCP and UDP ports, all of the time, can re-establish a Positive Control Model.
A high-function next gen firewall will also reduce overall network security costs by (1) eliminating the need for stand-alone Proxies, IPS/IDSs, and VPNs, and (2) unifying policy management. The money saved here can be applied to Detection and Response controls.
While the logical perimeter is an obvious deployment scenario, re-establishing internal network segmentation is also important. Much as a submarine is compartmentalized so that if one compartment floods, it does not sink the ship, segmenting your internal network up and down the stack will control user access to assets and limit the access of compromised systems. While VLANs have value with respect to performance, their security capabilities cannot stand up to the current threat landscape and compliance requirements. VLANs are comparable to the double yellow lines on a road – they provide guidelines but no real protection.
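A Positive Control Model is, at its core, default-deny: traffic is identified by application rather than port, and anything not explicitly allowed is dropped. The sketch below illustrates the idea only; the group names, application names, and rules are hypothetical, not the policy language of any actual firewall.

```python
# Minimal sketch of a default-deny, application-based policy.
# (group, application) pairs that are explicitly permitted:
ALLOWED = {
    ("sales",       "salesforce"),
    ("engineering", "ssh"),
    ("everyone",    "web-browsing"),
}

def decide(group: str, app: str) -> str:
    """Allow only explicitly whitelisted (group, app) pairs; deny the rest."""
    if (group, app) in ALLOWED or ("everyone", app) in ALLOWED:
        return "allow"
    # Default-deny: an unknown application is blocked regardless of port.
    return "deny"

assert decide("sales", "salesforce") == "allow"
assert decide("sales", "bittorrent") == "deny"
```

The key design point is the final `return "deny"`: in a Negative Control Model that line would be `return "allow"`, which is exactly what Zero Trust rejects.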
2. Use a “sandbox” control to detect unknown threats in files – The speed at which threats morph is so high, seconds/minutes, that signature-based threat detection controls like anti-virus cannot keep up. Nor can signatures detect targeted threats created to exploit unknown and zero-day vulnerabilities. Therefore all unknown files entering the organization from the Internet, regardless of port, protocol, or application, must be analyzed by allowing them to “detonate” in a safe environment, a “sandbox.” This can be done on-premise on an appliance or via a cloud-based service, ideally tightly integrated with the Next Generation Firewall.
3. Use a specialized anti-phishing email protection service - Phishing and spear-phishing continue to be a top attack vector for adversaries to trick users into clicking links that lead to malicious web pages. Traditional anti-spam services no longer provide enough email protection; they are no match for sophisticated phishing and spear-phishing attacks. An effective cloud-based service dedicated to blocking targeted email attacks is needed. The outbound links in the email must also be analyzed before the user is allowed to download a possibly malicious web page.
4. Use Threat Intelligence to prioritize vulnerability remediation - Vulnerability scanners generate large numbers of vulnerabilities which tend to overwhelm the limited resources dedicated to remediation. A variety of risk scoring methods have been used to prioritize remediation, with limited success. Asset tagging/ranking is valuable but insufficient. We recommend applying Threat Intelligence in conjunction with asset tagging/ranking to improve the risk scoring process and better prioritize remediation.
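One way to combine these signals is sketched below. The weighting scheme, CVE labels, and asset ranks are hypothetical and only illustrate the idea: a lower-severity vulnerability on a critical asset that is being actively exploited can outrank a higher-severity one that is not.

```python
def priority(cvss: float, asset_rank: float, exploited: bool) -> float:
    """Remediation priority from CVSS (0-10), asset criticality (0-1),
    and a threat-intel flag for exploitation in the wild. Illustrative
    weights, not a published scoring standard."""
    score = cvss * (0.5 + 0.5 * asset_rank)
    if exploited:
        score *= 1.5      # threat intel: active exploitation observed
    return round(min(score, 10.0), 2)

# Hypothetical scanner output: (id, cvss, asset_rank, exploited_in_wild)
vulns = [
    ("CVE-A", 9.8, 0.2, False),  # critical CVSS, low-value asset, no exploits seen
    ("CVE-B", 7.5, 1.0, True),   # lower CVSS, crown-jewel asset, actively exploited
]
ranked = sorted(vulns, key=lambda v: priority(*v[1:]), reverse=True)
```

Here CVE-B sorts ahead of CVE-A even though its raw CVSS score is lower, which is the behavior plain severity-based triage misses.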
5. Analyze logs using advanced machine learning algorithms to detect compromised and malicious users – A typical early step in an attack progression after compromising an endpoint is to escalate privilege by capturing the user’s credentials. From that point forward, no malware is needed as the attacker is using legitimate credentials to access information. In order to detect this activity, we recommend a behavior analysis control. During the last ten years there have been tremendous advances in machine learning algorithms to detect anomalous user behavior and attributes while minimizing false positives.
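A toy version of behavior analysis can be shown with a simple statistical baseline: flag a user whose daily activity count is an outlier relative to their own history. Production systems use far richer features and models; the threshold and the sample data below are illustrative assumptions.

```python
from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    """Flag `today` if it is more than `threshold` standard deviations
    above the user's own historical mean (a z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return (today - mu) / sigma > threshold

# Hypothetical baseline: files accessed per day by one user.
normal_days = [12, 15, 11, 14, 13, 12, 16]

assert not is_anomalous(normal_days, 17)   # within normal variation
assert is_anomalous(normal_days, 90)       # looks like credential misuse
```

The point of the example is the per-user baseline: an attacker holding valid credentials generates no malware signatures, but their volume and pattern of access rarely match the legitimate owner's history.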
6. Implement an Incident Management system to minimize incident costs - Incidents are inevitable, period. All threats cannot be detected rapidly enough to eliminate incidents. Furthermore, effective incident response is difficult due to (1) limited staff, (2) the variety and complexity of state and federal laws, and (3) the range of external and internal constituencies affected by a security incident. Mistakes and omissions will surely increase incident direct and indirect costs. Therefore, enterprises must invest in an automated system to better prepare for, assess, manage, and report on incidents.
7. Deploy a Cloud Services Manager to discover, analyze, and control Shadow IT - Employee teams are increasingly using cloud services on their own to improve their effectiveness and efficiency. Cloud services are easy to use, have a fast time to value, and use a pay-per-use (OpEx) pricing model. In most cases, these employee teams do not feel the need to ask for IT’s permission to deploy them. The result is that management has no visibility or control over these “Shadow” IT services. This introduces security, compliance, and legal risks.
A Cloud Services Manager ingests logs from existing firewalls, proxies, web security gateways, and SIEMs to (1) identify all cloud services being used by employee teams, (2) analyze the risks associated with them, and (3) re-establish centralized control where appropriate using a reverse proxy.
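The discovery step (1) can be sketched simply: group log entries by cloud-service domain and count distinct users, then subtract the services IT has already sanctioned. The log format, domains, and sanctioned list below are hypothetical stand-ins for what a real Cloud Services Manager ingests.

```python
from collections import defaultdict

# Hypothetical proxy log entries: (user, destination domain)
proxy_log = [
    ("alice", "app.dropbox.example"),
    ("bob",   "app.dropbox.example"),
    ("carol", "crm.approved-saas.example"),
]

SANCTIONED = {"crm.approved-saas.example"}  # services IT already approved

def shadow_services(log):
    """Return unsanctioned cloud services and their distinct user counts."""
    users = defaultdict(set)
    for user, domain in log:
        users[domain].add(user)
    return {d: len(u) for d, u in users.items() if d not in SANCTIONED}

print(shadow_services(proxy_log))
```

Counting distinct users per service, rather than raw requests, gives management a quick read on which Shadow IT services are entrenched enough to warrant formal risk analysis.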
8. Monitor your Supply Chain for breaches using a cloud-based service - As organizations improve their information security controls, attackers are increasingly shifting their focus to organizations’ suppliers. An attacker uses an organization’s supplier as a pivot into the targeted organization. A large organization can easily have thousands of suppliers who may not have adequate controls in place. Information Security audits may be an option, but aside from the expense, audits only provide periodic point-in-time analyses.
A cloud-based service focused on intercepting and collecting data from within live botnets can detect compromised devices at your suppliers without requiring the supplier to deploy controls they might not be able to afford or manage. For a more comprehensive analysis, the supplier can provide proxy or web security gateway logs to be correlated with the data collected from the botnets.
This cloud-based service is also appropriate for organizations, such as insurance companies, that want to monitor their brokers and agents.
9. Deploy an Enterprise Key & Certificate Management (EKCM) system - Some of the less visible by-products of the trend toward broadly enabling encryption include more encryption certificates and keys, and more variations in the way applications, platforms, and systems are configured to encrypt. An Enterprise Key and Certificate Management (EKCM) system allows organizations to more effectively implement and maintain encryption throughout their varied, and often disparate, environments. An EKCM system can provide measurable improvements in operational efficiency, system uptime, compliance measurement, audit readiness, and overall data security.
10. Deploy a backup, cloud-based DDoS Mitigation Service - In the last couple of years we’ve seen an increase in the number of Distributed Denial of Service attacks due in part to hacktivist activities, and a dramatic increase in the size of DDoS attacks. On-premise DDoS appliances can protect web servers but do not protect the enterprise’s communications pipes. In addition, most organizations that use a cloud-based DDoS Mitigation Service rely on one of the two major services. This represents Concentration Risk. A secondary service designed specifically for back-up can reduce this risk.
11. Deploy a non-signature-based endpoint malware detection control - It is now clear that traditional signature-based endpoint malware detection controls are no longer able to detect the majority of threats generated by attackers. Another endpoint malware detection control that resides in application space simply increases the attack surface, i.e., it can be detected and disabled by the attacker. Therefore, the new control must reside in the operating system kernel to be undetectable by attackers. In addition, the behavior analysis portion of the control must be performed off the endpoint lest performance be impacted.