[foaf-protocols] [foaf-dev] # # in urls

Story Henry henry.story at bblfish.net
Tue Sep 29 10:28:43 CEST 2009


(Peter Williams, this thread should be on the foaf-protocols mailing
list. foaf-dev is more for talk about the FOAF ontology. You can
subscribe here: http://lists.foaf-project.org/pipermail/foaf-protocols/
)

On 28 September Peter Williams wrote to foaf-dev:
> If the typical webid is http://peter.com/foaf.rdf#me and said webid
> was to be placed in the self-signed cert, what I'd still like to do
> is allow the self-signed cert to bear
> http://peter.com/foaf.rdf#me#<hash> instead. The idea here is that
> the foaf+ssl protocol engine on the consumer side would only process
> the SSL client cert IF it is able to resolve a foaf file matching
> the hash.

> My understanding of theory (which may of course differ from
> practice) is that folks receiving the #me#<hash> tag should be able
> to treat it opaquely during inferencing - assuming the engine has no
> knowledge of the resource authority's meaning (of the subcomponent).
> And, if the foaf file rules are appropriately designed, #me#<hash>
> may be specified to be equivalent to #me, of course (as may
> #me#<hash2>).

The problem with the above is that you would be putting semantics into
the hash tag that it should not have. And as Toby pointed out, you'd
have a very big problem calculating the hash of the representation, as
the hash URL would have to appear in the very representation you were
trying to calculate the hash for. So you'd have to know the result of
your hash before you calculated it.
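The circularity is easy to demonstrate. The sketch below uses a made-up
one-line "document" and Python's hashlib: substituting the real digest
into a representation that is supposed to contain its own hash changes
the bytes, which invalidates the digest.

```python
import hashlib

def sha1_hex(data: bytes) -> str:
    return hashlib.sha1(data).hexdigest()

# Hypothetical representation that is supposed to contain its own hash.
template = "<#me> a foaf:Person . # hash: {digest}"

# Hash the document with a placeholder digest in it...
guess = "0" * 40
actual = sha1_hex(template.format(digest=guess).encode())

# ...then substitute the real digest back in. The bytes have changed,
# so the digest no longer matches: you would need a fixed point of SHA1.
assert sha1_hex(template.format(digest=actual).encode()) != actual
```

Finding a document whose contents include its own SHA1 would amount to
finding a fixed point of the hash function, which is not feasible.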

On 29 Sep 2009, at 00:16, Toby Inkster replied with the following  
improvement:

> So, assuming that <http://tobyinkster.co.uk/> had a SHA1 hash
> of "c70e773f954a522ba14a1ee32c192eea20f096e1", then the  
> subjectAltName
> of my X.509 certificate might be:
>
> 	subjectAltName: email:mail at tobyinkster.co.uk,
> 	    URI:http://tobyinkster.co.uk/#i,
> 	    URI:urn:sha1:c70e773f954a522ba14a1ee32c192eea20f096e1#i,
> 	    registeredID:1.3.6.1.4.1.33926.0.0
>
> The main problem of that, as I see it, is determining what is the
> SHA1 hash of <http://tobyinkster.co.uk/>? Requesting that URL with
> an HTTP Accept header of "text/html" will result in a different
> stream of bytes than if you request it as "text/turtle".
>
> And if you change your FOAF file frequently?

Yes Toby, I agree: both of those are terminal criticisms of that idea.
If every time you changed your foaf file you had to change your hash,
and so change your cert, a lot of uses of foaf+ssl would be lost.
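Toby's content-negotiation point can be shown in a few lines. The byte
strings below are invented stand-ins for the HTML+RDFa and Turtle
representations of one and the same WebID document; each conneg variant
yields a different SHA1, so a single urn:sha1: name in the certificate
cannot cover both.

```python
import hashlib

# Hypothetical representations of the same resource, as returned for
# Accept: text/html and Accept: text/turtle respectively.
html_bytes = b"<div about='#i' typeof='foaf:Person'>Toby</div>"
turtle_bytes = b"<#i> a foaf:Person ."

html_hash = hashlib.sha1(html_bytes).hexdigest()
turtle_hash = hashlib.sha1(turtle_bytes).hexdigest()

# One resource, two representations, two different digests.
assert html_hash != turtle_hash
```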

Also, it is not quite clear why the <urn:sha1:...> URI should be an
alternative name of the subject. It is the digest of a representation,
but an alternative name?

Finally, I am not sure what this brings you security-wise. The foaf
file should be served up using https anyway.

> No, there must be a better way of doing this. Here's an idea, and I
> don't know if it would work, but how about you serve your FOAF file up
> using HTTPS instead of plain old HTTP?

Indeed that is pretty much required in foaf+ssl for security. One can  
do without, but one should clearly reduce one's trust in the resulting  
identification.

> (If you've got an "http://"
> WebID, then you can simply throw an HTTP redirect.) That way, the
> transaction between the server that hosts your FOAF, and any clients
> that request it, is transmitted over HTTPS - therefore *signed* by the
> server's key. The server's key can then be signed by your personal
> X.509 key.

This would be very interesting, but would not work too well with
current browsers. Imagine that the foaf file is in html with rdfa
markup. Then clearly you would wish for that file to also be readable
in a browser. But if the html is served over https with an X509
certificate signed by a personal certificate, which is not itself
signed by some CA, then current browsers will throw a security
exception with some very dire warning messages.

Also, what the X509 certificate serving the rdfa is doing is
guaranteeing that the server one is connected to is indeed the one one
wanted to connect to. So part of the importance of CA certificates is
that they help verify that one's DNS has not been poisoned. I don't
think a certificate signed by a personal self-signed key would do that.

> So, typical use case... you request protected resource A with your
> X.509 key. The server hosting A checks your X.509 key's
> subjectAltName and finds your WebID. It does a request for this
> WebID and gets a 303 redirect to an HTTPS address we'll call B.

((If the WebId is https, then of course there is no need for this  
redirect.))

So if you are concerned about security, you are already in a mess at
this point, because the redirect could be changed by a
man-in-the-middle attack - or indeed the DNS could have been poisoned,
and server B could be a fake server.
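As a rough illustration of the trust levels involved, here is a minimal
sketch (the helper name and labels are invented, and no real network
traffic is involved) that classifies a WebID dereference by whether the
303 hop travelled in the clear:

```python
from urllib.parse import urlsplit

def webid_trust(webid: str, final_url: str) -> str:
    """Crude, hypothetical trust classification for a WebID dereference."""
    first = urlsplit(webid).scheme
    last = urlsplit(final_url).scheme
    if first == "https" and last == "https":
        # TLS from the start: no unauthenticated hop, no redirect needed.
        return "authenticated"
    if first == "http" and last == "https":
        # The 303 itself travelled over plain http and could have been
        # tampered with, even though the final fetch is over TLS.
        return "redirect-unverified"
    return "unauthenticated"

assert webid_trust("https://joe.example/card#me",
                   "https://joe.example/card") == "authenticated"
assert webid_trust("http://joe.example/card#me",
                   "https://joe.example/card") == "redirect-unverified"
```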

> The server hosting A requests B and gets back a FOAF file as
> response; it checks B's server certificate is signed by the same
> certificate that originally requested A;

So this requires well-established CAs to sign personal certificates,
or to give domain owners certificates that can then sign other
certificates.

For this to work correctly, one would need a rule saying that EVERY
file written by X should be signed by X's certificate. Otherwise it is
not clear when this rule should be applied, i.e. when one should do
this test, and when one should reject the response.

Why should it be applied here for example?

Ok, so perhaps that would just be an extra security step, for those  
who want it.


> and it checks that the exponent and modulus in the FOAF data match
> the X.509 certificate that originally requested A.
>
> It should work; and it should be compatible with the existing
> FOAF+SSL model, just providing an extra level of security for those
> who want it.

Yes, this is an interesting idea to work on. With the caveat that it  
has some issues that would need to be worked out, as I mention above.
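For reference, the final check Toby describes - that the exponent and
modulus published in the FOAF data match the client certificate's RSA
public key - can be sketched with toy values. These are not real RSA
keys, and the helper names are invented; a real implementation would
parse the certificate and the FOAF graph first.

```python
def normalize_modulus(hex_str: str) -> int:
    # FOAF files often publish the modulus as hex with whitespace and
    # mixed case; normalize before comparing.
    return int("".join(hex_str.split()), 16)

def key_matches(cert_modulus: int, cert_exponent: int,
                foaf_modulus_hex: str, foaf_exponent: int) -> bool:
    # The WebID claim holds only if both components agree exactly.
    return (normalize_modulus(foaf_modulus_hex) == cert_modulus
            and foaf_exponent == cert_exponent)

# Toy values standing in for a parsed certificate and FOAF graph.
cert_mod, cert_exp = 0xC0FFEE, 65537
assert key_matches(cert_mod, cert_exp, "c0 ff ee", 65537)
assert not key_matches(cert_mod, cert_exp, "c0 ff ef", 65537)
```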


Henry


