OAuth – the privacy time bomb

Andy Baio writes in Wired about the privacy dangers of OAuth.

While OAuth enables OAuth Providers to replace passwords with tokens, improving the security of authentication and authorization for third-party applications, in many cases it gives those applications access to far more of your personal information than they need to perform their functions. This only increases the risk associated with breaches of personal data at those third-party application providers.

Andy focuses on Gmail because the risk of using Google as an OAuth Provider is greater. As Andy says:

“For Twitter, the consequences are unlikely to be serious since almost all activity is public. For Facebook, a mass leak of private Facebook photos could certainly be embarrassing. But for Gmail, I’m very concerned that it opens a major security flaw that’s begging to be exploited.

“You may trust Google to keep your email safe, but do you trust a three-month-old Y Combinator-funded startup created by three college kids? Or a side project from an engineer working in his 20 percent time? How about a disgruntled or curious employee of one of these third-party services?”

If you are using your Gmail (Google) credentials merely to authenticate to a third-party application, why should that application have access to your email? In the case of Xobni or Unsubscribe, for example, you do need to grant access rights because they provide specific functions that require Gmail content. But why does Unsubscribe need access to message content when all it really needs is the senders of your email? When you decided to use Unsubscribe, why couldn’t you limit it to your senders only? The bottom line is that by using OAuth you are trusting third-party applications not to abuse the privileges you grant them, and trusting that they have implemented effective security controls.
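
To make this concrete: in OAuth 2.0, everything an application can do is defined by the "scope" parameter of the authorization request. Here is a minimal sketch of how a broad grant differs from the narrow one an unsubscribe tool actually needs. The client ID and redirect URI are placeholders; the endpoint and scope strings are Google's published Gmail values, shown only for illustration.

```python
# Minimal sketch: the "scope" parameter is where an OAuth 2.0 grant is
# defined. The client ID and redirect URI below are placeholders; the
# endpoint and scope strings are Google's published Gmail values.
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

def authorization_url(client_id, redirect_uri, scopes):
    """Build the consent URL a third-party app sends the user to."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": " ".join(scopes),  # everything the app will be allowed to do
    }
    return AUTH_ENDPOINT + "?" + urlencode(params)

# Full mailbox access -- what too many applications ask for:
broad = authorization_url(
    "demo-app.example.com",             # placeholder client id
    "https://demo-app.example.com/cb",  # placeholder redirect
    ["https://mail.google.com/"],
)

# Message headers (senders, dates) only -- no message bodies:
narrow = authorization_url(
    "demo-app.example.com",
    "https://demo-app.example.com/cb",
    ["https://www.googleapis.com/auth/gmail.metadata"],
)
```

The user sees the difference on the consent screen, but the granularity has to exist on the provider's side before an application can request less.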

While Andy provides some good advice to people who use their Google, Twitter, or Facebook credentials for other applications, there is no technical reason for third-party applications to get access to so much personal information. In other words, when you allow a third-party application to use one of your primary applications (OAuth Providers) for authentication and/or authorization, you should be able to control the functions and data to which the third party has access. For this to happen, the Googles, Facebooks, and Twitters must build in more fine-grained access controls.

At present, the OAuth Providers do not seem motivated to limit third-party applications’ access to user content based on those applications’ actual needs. One reason might be that most users simply don’t realize how much access they are granting when they use an OAuth Provider. With no user pressure for finer-grained access, why would the OAuth Providers bother?

Aside from the lack of user pressure, it seems to me that the OAuth Providers are economically motivated to maintain the status quo for two reasons. First, they are competing with each other to become the cornerstone of their users’ online lives and want to keep the OAuth user interface as simple as possible. In other words, if authorization is too fine-grained, users will face too many choices and may decide not to use that OAuth Provider. Second, the OAuth Providers want to keep things as simple as possible for third-party developers in order to attract them.

I would hate to see the Federal Government get involved to force the OAuth Providers to provide more fine-grained access control. But I am afraid that a few highly publicized breaches will have that effect.

Just as enterprises are moving to a Zero Trust Model, so must consumers.

Adopt Zero Trust to help secure the extended enterprise

John Kindervag, a principal analyst at Forrester, has developed an interesting approach to securing the extended enterprise. He calls it the Zero Trust Model, which he describes in this article: Adopt Zero Trust to help secure the extended enterprise.

First, let me say I am not connected to Forrester in any way. I am connected to John Kindervag on LinkedIn based on a relationship from a prior company.

Second, the Zero Trust Model rings true for me in that the incident data available for review shows that we must assume prevention controls can never be perfect. We must assume that (1) devices will be compromised, including theft of user authentication credentials, and (2) some users interacting with systems will behave badly, either accidentally or on purpose.

John uses the term Extended Enterprise to refer to an organization’s functional network, which extends to (1) remote and mobile employees and contractors connecting via smartphones and tablets as well as laptops, and (2) business partners.

The Zero Trust Model of information security simplifies how information security is conceptualized by assuming there are no longer “trusted” interfaces, applications, traffic, networks, or users. It takes the old model — “trust but verify” — and inverts it, since recent breaches have proven that when an organization trusts, it doesn’t verify.

Here are the three basic ideas behind the Zero Trust Model:

  1. Ensure all resources are accessed securely – regardless of location
  2. Adopt the principle of least privilege, and strictly enforce access control
  3. Inspect and log all traffic
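
As a toy illustration of what these three ideas look like in code, consider the sketch below: an access check that refuses insecure channels, enforces least privilege with default deny, and logs every decision. All names and the policy table are invented for this sketch.

```python
# Toy illustration of the three Zero Trust ideas: secure access only,
# least privilege with default deny, and logging of every decision.
# All names and the policy table are invented for this sketch.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)

# Explicit grants: (user, resource) -> permitted actions.
# Anything not listed here is denied -- there is no "trusted inside".
POLICY = {
    ("alice", "payroll-db"): {"read"},
    ("backup-svc", "payroll-db"): {"read", "snapshot"},
}

@dataclass
class Request:
    user: str
    resource: str
    action: str
    encrypted: bool  # idea 1: all resources accessed securely

def authorize(req: Request) -> bool:
    allowed = req.encrypted and req.action in POLICY.get(
        (req.user, req.resource), set()  # idea 2: least privilege, default deny
    )
    logging.info("user=%s resource=%s action=%s allowed=%s",  # idea 3: log all traffic
                 req.user, req.resource, req.action, allowed)
    return allowed

print(authorize(Request("alice", "payroll-db", "read", True)))    # True
print(authorize(Request("alice", "payroll-db", "delete", True)))  # False
```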

Here are Kindervag’s (Forrester) top recommendations:

  • Conduct a data discovery and classification project
  • Embrace encryption
  • Deploy NAV (Network Analysis & Visibility) tools to watch dataflows and user behavior
  • Begin designing a zero-trust network

The article provides some detail on each of these key ideas and recommendations.
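
As an example of the first recommendation, data discovery might start with something as simple as scanning file shares for sensitive patterns. The sketch below is deliberately crude: the regexes, labels, and path are invented, and real classification tooling goes far beyond pattern matching.

```python
# Crude sketch of data discovery and classification: scan files for
# sensitive patterns. The regexes, labels, and path are invented;
# real classification tools are far more sophisticated.
import re
from pathlib import Path

PATTERNS = {
    "ssn":  re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(root: str) -> dict[str, list[str]]:
    """Map each matched label to the files containing it."""
    findings: dict[str, list[str]] = {}
    for path in Path(root).rglob("*.txt"):   # narrow glob for the demo
        text = path.read_text(errors="ignore")
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                findings.setdefault(label, []).append(str(path))
    return findings

if __name__ == "__main__":
    for label, files in classify("/srv/shared").items():  # placeholder path
        print(label, "->", files)
```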

Forrester Pushes ‘Zero Trust’ Model For Security – DarkReading

Last week Forrester Research began promoting a new term, “Zero Trust,” to define its new security model. The new model’s underlying principle is “trust no one.” In other words, you cannot trust the servers and the workstations inside your network any more than you could trust external third parties.

Given the nature of the changes we’ve seen during the last three to five years in technology and the threat landscape, we agree. We have seen a huge increase in what we call “inside-out” attacks, where insiders are lured to malware-laden web pages on, for example, Facebook, Twitter, YouTube, and even the New York Times. The malware gets downloaded to the unsuspecting person’s workstation along with the normal content on the web page. From there, the malware steals the person’s credentials to access bank accounts, internal intellectual property, customer records, or whatever the attackers can readily convert to cash. This type of malware is not the traditional single-purpose virus or worm. Rather, it’s an agent controlled by remote servers that can modify its functions. These “bots” have gone undetected for days, weeks, months, even years.

From a security perspective, this type of attack looks very similar to a malicious insider, and information security must protect against it along with the traditional “outside-in” attack method.

From my perspective, Forrester’s Zero Trust model and Cymbel’s next-generation defense in-depth architecture are the same when it comes to network security. Our Approach, based on the SANS 20 Critical Security Controls for Effective Cyber Defense, is broader.

However, there is one area where I disagree somewhat with John Kindervag, the Forrester analyst discussing the Zero Trust model, who is reported to have said:

“It’s like a UTM [unified threat management] tool or firewall on steroids,” he says. “It does firewall, IPS, data leakage protection, content filtering, and encryption with a 10-gigabit interface that separates the switching fabrics for each function.”

Gee, how did he leave out packet shaping? I have no doubt that there are vendors attempting to do all these functions in a single appliance, but it reminds me of Network Access Control in 2007. NAC was going to subsume all manner of security functions in a single appliance. The complexity was overwhelming. Furthermore, most organizations really don’t want all that functionality in one box. There is still the need for a defense-in-depth architecture, in our opinion.

Some level of function consolidation is surely reasonable and advantageous to organizations with limited resources, i.e., everyone! However, the expertise needed to develop and advance all of these different functions is virtually impossible to assemble in one company. For example, full packet capture is really about innovative data storage and retrieval. High-performance, stream-based, application-level firewall/IPS is about innovative deep-packet inspection combined with clever hardware design. And data loss prevention requires proxies and semantics-based data classification algorithms.

While I am surely not saying that we can achieve nirvana now, the components of Cymbel’s next-generation defense-in-depth architecture can provide major improvements in network security today:

  • Next-Generation Firewall with application- and user-level internal network segmentation, integrated intrusion prevention, and bandwidth management – Palo Alto Networks
  • 0-day threat and botnet command & control communications prevention – FireEye
  • Cloud-based web and email security – Zscaler
  • Device/software discovery and configuration change detection – Insightix, AccelOps
  • High Performance Full Packet Capture – Solera Networks
  • Layer 2, 3, 4 encryption – Certes Networks
  • User-based, behavioral anomaly detection using net flows and logs plus high-performance event correlation – Lancope
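
The last bullet deserves a word on mechanism: bots that phone home on a fixed timer produce outbound flows at suspiciously regular intervals, which is one behavioral signal that flow analysis can score. Below is a crude sketch of that idea; the flow records and threshold are invented, and products in this space do vastly more.

```python
# Crude sketch of flow-based beacon detection: bots calling home on a
# fixed timer produce outbound flows at machine-regular intervals.
# The flow records and threshold are invented for illustration.
import statistics

def beacon_score(timestamps: list[float]) -> float:
    """Low variance between consecutive flow times => likely beaconing."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 3:
        return 0.0
    mean = statistics.mean(gaps)
    if mean == 0:
        return 0.0
    # Coefficient of variation near 0 means metronome-regular traffic.
    return 1.0 - min(statistics.stdev(gaps) / mean, 1.0)

# Outbound flows to one external host, seconds since midnight (made up):
flows = [100.0, 400.1, 700.0, 999.9, 1300.2, 1600.0]
if beacon_score(flows) > 0.9:   # arbitrary demo threshold
    print("host looks like it is beaconing to a C&C server")
```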

I look forward to learning more about Forrester’s Zero Trust model and working with partners who recognize the new landscape and respond with creative solutions for our clients.


