[foaf-protocols] report on EV and SSL MITM proxying

Henry Story henry.story at bblfish.net
Tue Mar 8 18:52:05 CET 2011


You still don't address the main issue.

Your claim is that man-in-the-middle attacks are easy. My claim is: only if the firewall or proxy owns the client. That is, the firewall/proxy man in the middle has to put a certificate on the user's computer.

If it can do that then it is either a friend or a virus.

If it is a virus, then TLS is not the issue. If it is your larger self (your company) then this is correct behaviour.

If your computer belongs to your company, then you are speaking for them, and they should have told you not to do any personal business on their machines. It is common sense security practice: you cannot be secure on someone else's machine if you don't trust them. 

So you should use your own machine for private business, and if you want to be really secure, download an open-source OS too.
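
To make the "certificate on the user's computer" point testable: here is a hypothetical sketch (Python standard library only; the function names are mine, not any WebID or browser API) of auditing which CA certificates the local TLS stack trusts. An intercepting proxy's injected root would show up as a new entry relative to a previously recorded baseline.

```python
import ssl

def trusted_roots():
    """(subject, serialNumber) for every CA certificate the default TLS
    context trusts; an injected proxy root would appear in this list."""
    ctx = ssl.create_default_context()
    return [(c.get("subject"), c.get("serialNumber"))
            for c in ctx.get_ca_certs()]

def unexpected(baseline):
    """Roots trusted now that were not in a previously recorded baseline."""
    seen = set(baseline)
    return [root for root in trusted_roots() if root not in seen]
```

This only inspects the OpenSSL-backed default store that Python sees; a browser or OS keychain may hold additional roots, which is exactly why who administers the machine matters.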


On 8 Mar 2011, at 18:28, peter williams wrote:

> Feel free to make it shorter for webid IX, if you want.
>  
> Folks need to get their heads around the fact that one proxy (the corporate one, say) can collude with another at the ISP (say), which operates at the ISP as the corporate’s own firewall (much as the corporate operates a firewall for its users’ browsers). It’s a cascade of proxies, and proxies can collude (to build their mutual trust model, at the transport layer). This is all an expression of long-term security policy doctrines for how to build “trusted networks”, going back at least 30 years. It’s even got a name: the connection-oriented abstraction: the delivery of the illusion of end-to-end connectivity, enabling global security policy for the entire net to control local behaviors, because local agents can be trusted to enforce the global rules on local interactions.
>  
> So, in our case, the ISP re-signs the server cert received from the resource server, for consumption on the (SSL) transport bridge between the 2 firewalls (ISP to corporate). The final firewall in the chain (corporate) also re-signs the server cert received from the firewall one up (which re-signed the real site), finally meeting the trust-anchor requirements of the browser – which now “just doesn’t whine” to the user. After all, that’s all anyone really wants: a “whine-free” security policy.
>  
> The browser never knows that there were 2 handoffs (or more) between SSL MITMing agents (acting quite properly, or otherwise). It just presents an address bar; a UI metaphor, where the user is supposed to trust the browser’s root store. In military office systems, the https browser doesn’t do that of course, since its root store is on the smartcard (not in the browser) – there is no EV in military office systems, note (being a meaningless assurance). The user has to trust the admin of the smartcard, whose file contents (of trust anchors) are controlled by the LAN, in general. They can change each day, for all it matters; the browser just goes on saying: all is good in the garden. Today the CA is trusted (by the LAN), tomorrow it’s not. Users are blissfully unaware, and not involved in trust decision-making.

Good. So if you use a smartcard to store a list of CAs you trust, then presumably behind corporate firewalls you would be blocked. Which shows that SSL works fine. And if you use a client cert, you won't be able to log in. Also good. That's what you want!
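
The fingerprint comparison Peter's reputation-service point turns on is easy to express. A minimal sketch (Python standard library; `PINNED_SHA256` is a placeholder value I made up, and the pin would be obtained out of band): since a re-signing proxy must present its own certificate, the fingerprint the client actually sees no longer matches the pinned one.

```python
import hashlib
import socket
import ssl

# Hypothetical known-good fingerprint of the real site's certificate
# (placeholder -- in practice recorded out of band or crowd-sourced).
PINNED_SHA256 = "0" * 64

def fingerprint(der_cert: bytes) -> str:
    """Hex SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def pin_matches(host: str, pinned: str, port: int = 443) -> bool:
    """Fetch the certificate the client *actually* sees and compare it
    to the pin. A re-signing proxy presents its own certificate, so
    the fingerprint differs and the mismatch exposes the interception."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return fingerprint(der) == pinned
```

Note that this detects the proxy but cannot bypass it: behind the firewall the connection either shows the proxy's fingerprint or is blocked, which is the same point as the smartcard root store above.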

>  
> If you are interested, go see Opera 11’s latest marketing about its new address bar, and how much “safer” the user now is – because some reputation service (third party, I assume) qualifies the server cert. Though the user may not bother to note the change of issuer name (and cert fingerprint) in the really technical (and hard to find) cert dialog, presumably a reputation web-service can – and now warn the user that the site is being spoofed by (properly acting firewalls/SSL proxies) – given crowd-sourced inputs about the true fingerprint of SSL certified sites. Won’t be long before that reputation service gets “assimilated” though!
>  
> While this all matters to us in the https scheme used in webid (since it controls the integrity of the foaf card being evaluated by the RDF engine enforcing the cert/rsa ontology), it also matters to us in the very presentation of the client authn signature and supporting cert to the resource server. This was Ryan’s point (not me, the annoying one): perhaps TLS client authn is inherently not viable, he asserted, since the proxies in the chain cannot merely vector upstream what the browser actually signed (2 hops back) – the SSL handshake at the resource server detects the very act of channel-tampering, per its design in delivering what is called a “connection integrity” security service.
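
Ryan's "connection integrity" point can be modeled in a few lines. This is only a toy (an HMAC stands in for the real CertificateVerify signature, transcripts are plain byte strings, and all names are mine): the client signs the handshake transcript it observed, so a proxy relaying that signature onto its own upstream handshake fails verification, because the two transcripts necessarily differ.

```python
import hashlib
import hmac
import os

def sign(key: bytes, transcript: bytes) -> bytes:
    """Stand-in for the CertificateVerify signature: the client signs
    a digest of the handshake transcript it observed (HMAC in this toy,
    an RSA/ECDSA signature in real TLS)."""
    return hmac.new(key, hashlib.sha256(transcript).digest(),
                    hashlib.sha256).digest()

key = os.urandom(32)                        # the client's credential
client_view = b"client<->proxy handshake"   # transcript the browser saw
server_view = b"proxy<->server handshake"   # transcript the real server saw

sig = sign(key, client_view)
# Relaying sig onto the proxy's own upstream handshake fails, because
# the server verifies against the transcript *it* recorded.
assert not hmac.compare_digest(sig, sign(key, server_view))
```

That mismatch is the design working as intended: the proxy can terminate and re-originate the tunnel, but it cannot forward the client's authentication two hops upstream.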

Presumably you will get exactly the same political pressures put on any other system of cryptography, and all kinds of other issues with encrypted content being blocked, so you will end up having the same problem. If someone who pays you wants to see what you are doing, they will control the software no matter what, or block your communication. Or you'll have to bypass them surreptitiously. A standards body can't help you there.

>  
> Thus, to keep the world of transport proxies, sacrifice the viability of the end-to-end client authn procedure of SSL. It’s only viable in the local intranet. To get further, one has to use websso intermediaries (just like ADFS v2!)
>  
> We are conflating 4 issues:
>  
> 1. Consent to be notified of interception
> 2. UI issues and completeness against phishing
> 3. Proxies re-casting SSL domain-name endpoint certs
> 4. Proxies interfering with the delivery of client authn signatures and certs
>  
> It’s hard. But this is the nature of https. Remember 2 things: it’s not for lack of trying that, 15 years later, client certs are still not viable in the public space – because of proxies and their ability to set up crypto tunnels that are not “intermediated”; and the reality of https is not even specified formally, mostly because nobody knows how, it being such a complex web of social/trust topics that “evolved” rather than were designed.

I don't think that's the reason. As I showed, we can help corporate firewalls do WebID easily.

>  
> But, 15 years later, it’s all better than it was, point by point by point. As things go mainstream, there has to be consolidation, give and take, relevant to a scale of 6 billion users vs. the 600 banks the technology was originally supposed to go to!
>  
>  
> From: public-xg-webid-request at w3.org [mailto:public-xg-webid-request at w3.org] On Behalf Of Henry Story
> Sent: Tuesday, March 08, 2011 7:09 AM
> To: peter williams
> Cc: public-xg-webid at w3.org
> Subject: Re: report on EV and SSL MITM proxying
>  
>  
> On 8 Mar 2011, at 16:02, peter williams wrote:
> 
> 
> I think we are about 60% understanding https.
>  
> You said that in the previous mail where I answered your points one by one.
> 
> 
>  
> We are understanding now that not only can the outgoing corporate firewall be an attacker, so can any reverse proxy on the path. There may be n of them. Each one is semantically attacking the end-to-end user model of https, just as each one can be poisoning document caches. (This is the security way of looking at the web architecture :-) )
>  
> It can't be an attacker without putting a certificate on your machine, which it can only do if the machine is owned by the same organisation as the one that owns the firewall.
>  
> Can you answer this point, without a huge song and dance?
>  
> Henry

Social Web Architect
http://bblfish.net/

