[foaf-protocols] [foaf-dev] # # in urls

Peter Williams home_pw at msn.com
Tue Sep 29 17:07:17 CEST 2009


My original desire was to exploit the "indirection" power of fragments, allowing the semantic scheme for a particular class of "https-powered" webids to be defined on the fly by the media type - the signal that indicates the returned RDF file's marshalling scheme.

That is: based on the "metadata signals" in the media type (and the media type's attributes), when the UA resolves the webid it would then know HOW to parse the webid's fragment. The parsed element would be handled accordingly - allowing the https protocol handlers in the UA to move beyond fixed trust semantics. For example, the media type text/rdf+xml;https=wot might induce the UA to ensure that the hash in the webid's fragment aligns with the SSL server's EE cert AND that the public key is assured using the w3c wot (using the graph-walking features of the SPARQL server). There would then be many, many opportunities for "applied-https" design here, since one has an "infrastructure of indirection".
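For illustration only, a minimal sketch (in Python; the https=wot parameter is the hypothetical example above, and the scheme names are invented) of how a UA's https handler might dispatch on such a media-type signal:

    # Sketch only: pick a trust scheme from a media-type parameter.
    def parse_media_type(content_type):
        """Split "type/subtype;k=v;..." into the type and its parameters."""
        parts = [p.strip() for p in content_type.split(";")]
        params = dict(p.split("=", 1) for p in parts[1:] if "=" in p)
        return parts[0], params

    def trust_scheme(content_type):
        """Decide which trust semantics the https handler should apply."""
        _, params = parse_media_type(content_type)
        # With no signal, fall back to ordinary SSL server authentication.
        return params.get("https", "default-ssl")

    assert trust_scheme("text/rdf+xml;https=wot") == "wot"
    assert trust_scheme("text/rdf+xml") == "default-ssl"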

Now, I could not care less about the webid fragment tag's internal syntax (I merely want its indirection power to drive the next generation of https handlers). In pointing out the Microsoft example of multi-element format tags, I simply wanted to point out how expressive it can be (when augmenting XML and HTML elements with behavior scripts such as stateful data containers, for example).

But perhaps focusing overly on the fragment is NOT the right thing to do; especially if folks suddenly turn willing to use the cert itself more "richly". It's interesting to me that W3C types have suddenly embraced ASN.1 and the rich extension model of certs - after 15+ years of pouting about all that ASN.1 stuff inherited from 1980s ISO/CCITT committees!

What even I cannot believe anyone wants now (and I have been Mr Populariser of certs for 20+ years) is for TODAY's web-centric trust graph to be based on certs. Though cert chains can certainly be discovered and resolved (DISA directories regularly resolve cert graphs in a space of 4 million nodes), we CAN do better than putting base64-encoded ASN.1 blobs in foaf files, I feel. That is how we thought of the problem in 1991: directory-based searching, and coordination of naming and cert discovery hints. Today, we all know we have signed graphs that can do much better than that (as showcased in the OASIS XDI pilots, if nowhere else).

But, for foaf+ssl purposes, I feel that

1. Leveraging ASN.1 client certs and the ssl layer's security services is the right thing to do, for now.

2. Later, we can replace 1988-era client certs expressed in ASN.1 with signed graphs (TLS has removed the restriction that the "client cert" message must bear objects of the ASN.1 cert type; any object playing the ROLE of a client cert can be transferred in that field these days, once indicated).

3. It would be cute if the server's media type/fragment tag told the UA resolving a webid citing https what "trust scheme" the UA's https handler should apply (above and beyond enforcing the security services in its ssl protocol handler). But this is a variant of a decade-long fanciful notion I've had - wanting folks to stop thinking of https key management and trust processing as some magical, forever-fixed thing. Rather, it IS something you and I can change, so that the trust semantics of the https layer are viewed as "programmable" and "profilable", not something forever tied to key management notions of the DNS and the military-centric internet.

4. I don't much like the idea of the SSL server serving your personal foaf file citing the very same EE cert (or same public key) as is used in asserting the webid in the client cert. This intuitively offends my (military-trained) key management background, where we ALSO have to worry about subtle crypto compromises. But better cert chaining and ephemeral certs can bridge the gap here, I feel - without compromising the cipher suite.

5. The entity processing the inbound client cert's assertion of the webid certainly needs to resolve that webid. Since I really no longer want to resolve the "cert" itself, I feel it's entirely appropriate to treat the cert as a metadata container - for SHA hashes, webids, and anything else one wants to signal to the resolver's https handler. It then plays the "director" role I wanted for the media type + fragment; which is fine! (See the sketch after this list.)
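As a sketch of point 5, assuming the Python "cryptography" package (the SAN layout follows Toby's example quoted below; nothing in the cert is "resolved", it is read purely as a carrier of metadata):

    # Sketch only: treat a client cert as a metadata container, pulling
    # webids and sha1 URNs out of subjectAltName for the https handler.
    from cryptography import x509

    def san_metadata(cert_pem):
        cert = x509.load_pem_x509_certificate(cert_pem)
        san = cert.extensions.get_extension_for_class(
            x509.SubjectAlternativeName)
        uris = san.value.get_values_for_type(x509.UniformResourceIdentifier)
        webids = [u for u in uris if u.startswith(("http://", "https://"))]
        hashes = [u for u in uris if u.startswith("urn:sha1:")]
        return webids, hashes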

-----Original Message-----
From: hjs at bblfish.net [mailto:hjs at bblfish.net] On Behalf Of Story Henry
Sent: Tuesday, September 29, 2009 1:29 AM
To: Toby Inkster
Cc: Peter Williams; foaf-dev of a Friend; foaf-protocols at lists.foaf-project.org
Subject: Re: [foaf-dev] # # in urls

(Peter Williams, this thread should be on the foaf-protocols mailing
list. foaf-dev is more for talk on the foaf ontology. You can subscribe
here: http://lists.foaf-project.org/pipermail/foaf-protocols/
)

On 28 September Peter Williams wrote to foaf-dev:
> If the typical webid is http://peter.com/foaf.rdf#me and said webid
> was to be placed in the self-signed cert, what I'd still like to do
> is allow the self-signed cert to bear
> http://peter.com/foaf.rdf#me#<hash> instead. The idea here is that
> the foaf+ssl protocol engine on the consumer side would only process
> the SSL client cert IF it is able to resolve a foaf file matching
> the hash.

> My understanding of theory (which may of course differ from
> practice) is that folks receiving a #me#<hash> tag should be able to
> treat it opaquely during inferencing - assuming the engine has no
> knowledge of the resource authority's meaning (of the subcomponent).
> And, if the foaf file rules are appropriately designed, #me#<hash>
> may be specified to be equivalent to #me, of course (as may
> #me#<hash2>).

The problem with the above is that you would be putting semantics into
the hash tag that it should not have. And as Toby pointed out, you'd
have a very big problem calculating the hash of the representation, as
the hash URL would have to appear in the very representation you were
trying to hash. So you would have to know the answer of your hash
before you calculated it.
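To spell that circularity out (the one-triple foaf file here is invented purely for illustration):

    import hashlib

    def foaf_doc(h):
        # The representation must cite its own hash, e.g. in the
        # #me#<hash> webid it declares equivalent to #me.
        return '<#me> owl:sameAs <#me#%s> .' % h

    # Publishing needs an h with
    #     h == hashlib.sha1(foaf_doc(h).encode()).hexdigest()
    # i.e. a fixed point of SHA-1 over your own output, which nobody
    # knows how to compute.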

On 29 Sep 2009, at 00:16, Toby Inkster replied with the following
improvement:

> So, assuming that <http://tobyinkster.co.uk/> had a SHA1 hash of
> "c70e773f954a522ba14a1ee32c192eea20f096e1", then the subjectAltName
> of my X.509 certificate might be:
>
>       subjectAltName: email:mail at tobyinkster.co.uk,
>           URI:http://tobyinkster.co.uk/#i,
>           URI:urn:sha1:c70e773f954a522ba14a1ee32c192eea20f096e1#i,
>           registeredID:1.3.6.1.4.1.33926.0.0
>
> The main problem of that, as I see it, is determining what is the
> SHA1 hash of <http://tobyinkster.co.uk/>? Requesting that URL with
> an HTTP Accept header of "text/html" will result in a different
> stream of bytes than if you request it as "text/turtle".
>
> And if you change your FOAF file frequently?

Yes Toby, I agree: both of those are terminal criticisms of that idea.
If every time you changed your foaf file you had to change your hash,
and so change your cert, a lot of uses of foaf+ssl would be lost.
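To make the first of those criticisms concrete, a sketch using only the Python standard library (the URL and media types are Toby's examples):

    import hashlib
    import urllib.request

    def sha1_of(url, accept):
        req = urllib.request.Request(url, headers={"Accept": accept})
        with urllib.request.urlopen(req) as resp:
            return hashlib.sha1(resp.read()).hexdigest()

    # In general these differ, so "the SHA1 of <http://tobyinkster.co.uk/>"
    # is undefined until one representation is pinned down:
    # sha1_of("http://tobyinkster.co.uk/", "text/html")
    # sha1_of("http://tobyinkster.co.uk/", "text/turtle")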

Also, it is not quite clear why the <urn:sha1:...> URI is an
alternative name of the subject. It is the digest of a
representation - but an alternative name?

Finally, I am not sure what added security this brings you. The
foaf file should be served up using https.

> No, there must be a better way of doing this. Here's an idea, and I
> don't know if it would work, but how about you serve your FOAF file up
> using HTTPS instead of plain old HTTP?

Indeed, that is pretty much required in foaf+ssl for security. One can
do without it, but one should then clearly reduce one's trust in the
resulting identification.

> (If you've got an "http://" WebID, then you can simply throw an HTTP
> redirect.) That way, the transaction between the server that hosts
> your FOAF, and any clients that request it, is transmitted over
> HTTPS - therefore *signed* by the server's key. The server's key can
> then be signed by your personal X.509 key.

This would be very interesting, but would not work too well with
current browsers. Imagine that the foaf file is in html with rdfa
markup. Then clearly you would wish for that file to also be readable
by a browser. But if the html is served over https with an X.509 cert
signed by a personal certificate that is not itself signed by some CA,
then current browsers will throw a security exception with some very
dire warning messages.

Also, what the X.509 cert serving the rdfa is doing is guaranteeing
that the server one is connected to is indeed the intended one. So
part of the importance of CA certificates is that they help verify
that one's DNS has not been poisoned. I don't think a CA signed by a
personal self-signed key would do that.

> So, typical use case... you request protected resource A with your
> X.509 key. The server hosting A checks your X.509 key's
> subjectAltName and finds your WebID. It does a request for this
> WebID and gets a 303 redirect to an HTTPS address we'll call B.

((If the WebID is https, then of course there is no need for this
redirect.))
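Where the redirect is needed, it could be as small as this sketch (Python standard library; peter.com is just the example host from this thread):

    # Sketch only: the plain-http side of an http:// WebID 303s the
    # foaf document over to https.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectToHttps(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(303)  # See Other
            self.send_header("Location", "https://peter.com" + self.path)
            self.end_headers()

    # HTTPServer(("", 80), RedirectToHttps).serve_forever()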

So if you are concerned about security, you are already in a mess at
this point, because the redirect could be changed by a
man-in-the-middle attack - or indeed the DNS could have been poisoned,
and server B could be a fake.

> The server hosting A requests B and gets back a FOAF file as a
> response; it checks that B's server certificate is signed by the
> same certificate that originally requested A;

So this requires well-established CAs to sign personal certificates,
or to give domain owners certificates that can then sign other
certificates.

For this to work correctly, one would need a rule saying that EVERY
file written by X should be signed by X's certificate. Otherwise it is
not clear when this rule should be applied, i.e. when one should do
this test and when one should reject the response.

Why should it be applied here, for example?

Ok, so perhaps that would just be an extra security step, for those
who want it.


> and it checks that the exponent and modulus in the FOAF data match
> the X.509 certificate that originally requested A.
>
> It should work; and it should be compatible with the existing FOAF+SSL
> model, just providing an extra level of security for those who want
> it.

Yes, this is an interesting idea to work on, with the caveat that it
has some issues that would need to be worked out, as I mention above.
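For concreteness, the exponent/modulus check Toby describes might look like this sketch (assuming rdflib and the Python "cryptography" package, and assuming the foaf file uses the cert: vocabulary at http://www.w3.org/ns/auth/cert# with a hex modulus and an integer exponent - conventions this thread does not fix):

    # Sketch only: accept the client cert iff some key published at the
    # WebID has the same RSA modulus and exponent.
    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import rsa
    from rdflib import Graph, Namespace, URIRef

    CERT = Namespace("http://www.w3.org/ns/auth/cert#")

    def key_matches(cert_pem, foaf_graph, webid):
        pub = x509.load_pem_x509_certificate(cert_pem).public_key()
        if not isinstance(pub, rsa.RSAPublicKey):
            return False
        nums = pub.public_numbers()
        for key in foaf_graph.subjects(CERT.identity, URIRef(webid)):
            mod = foaf_graph.value(key, CERT.modulus)
            exp = foaf_graph.value(key, CERT.exponent)
            if mod is None or exp is None:
                continue
            if int(str(mod), 16) == nums.n and int(str(exp)) == nums.e:
                return True
        return False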


Henry



