[foaf-protocols] FOAF+SSL delegation: logging in to an HTTP server

Bruno Harbulot Bruno.Harbulot at manchester.ac.uk
Tue Apr 28 21:26:57 CEST 2009

I think we'll have to disagree on this one. Perhaps you could ask some 
security experts at Sun to give their point of view.

Story Henry wrote:
> In this discussion one should remember that security is not absolute: 
> this being a point that is
> part of the introductory class material for every course on security. 
> One can increase security in many
> ways. One way may be to sign people's foaf files. But one can get a lot 
> without needing to resort to that.

Yes, it depends on the application, but the security offered by verifying 
the key via dereferencing is fairly low (the same as OpenID's).

> On 28 Apr 2009, at 18:11, Bruno Harbulot wrote:
>> Secondly, in terms of auditing, it's hard to guarantee what was at the
>> URI when the authentication took place.
> No, that is easy: you just write a server that keeps that information. 
> With such a server
> the Service Provider can have a full log of all the transaction it did.

That's a lot to log. Even if you log the public key, how do you know 
who is telling the truth and whose key it was? Logging only helps with 
auditing, a posteriori: you can't assert who is saying what during the 
transaction, and afterwards you can't really tell which key the user 
intended to use at the time.
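To make Henry's logging suggestion concrete, here is a minimal sketch (all 
names hypothetical) of the kind of audit record a Service Provider could 
keep: the key presented in the TLS handshake alongside the key found at the 
WebID at that moment. As argued above, this only helps a posteriori.

```python
import hashlib
import json
import time

def audit_record(webid, presented_key, dereferenced_key):
    """Build one audit-log entry binding a transaction to the key
    material seen at authentication time. Keys are raw bytes (e.g. DER);
    only fingerprints are stored."""
    fp = lambda k: hashlib.sha256(k).hexdigest()
    return {
        "timestamp": time.time(),
        "webid": webid,
        "presented_key_sha256": fp(presented_key),
        "published_key_sha256": fp(dereferenced_key),
        "keys_matched": fp(presented_key) == fp(dereferenced_key),
    }

log = []
log.append(audit_record("https://romeo.example/#me", b"KEY-A", b"KEY-A"))
print(json.dumps(log[-1], indent=2))
```

Note that such a record only shows what the server *saw*; it cannot by 
itself establish which key the user actually intended to publish.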

>> Suppose an attacker has found a way to hack into the hosting of
>> <https://romeo.example/#me> to replace the public key. He can put his
>> own public key, do a transaction with a service in Romeo's name, put
>> Romeo's legitimate public key back in place.
> yes. So the SP's server should keep track of what public key was used 
> and what public key
> he found published at the WebId. For many low value transactions that 
> logging should be fine.
> For larger transactions the service could use a trusted intermediary to 
> do the logging.

Remember that the risk is somewhat higher with a global ID: someone who 
hijacks your ID this way could use services you've never heard of, 
without you being notified about it. Some misuses could surface only 
after a long delay, thereby making it more difficult to prove your 
innocence.

>> Even if he eventually found out that someone had hacked into his FOAF 
>> file,
>> he wouldn't necessarily know what happened or when. Grounds for appeal 
>> would
>> be very slim. From the service point of view, all would have taken 
>> place as if
>> <https://romeo.example/#me> had done the operation.
> Yes. If I break into your house, it will probably not be very 
> difficult for me to get your credit card number. Once in your house I 
> can then very easily do a lot of high value transactions over the phone. 
> People will be very confident that these transactions are indeed coming 
> from you, as I am answering the telephone.
>> It can then become a
>> struggle between the service and Romeo, who said what and who verified
>> the ID properly. Again, a potential legal nightmare. OpenID has the very
>> same problem. This is fine for some services, but not for all.
> Yes, amazingly enough, most of today's worldwide commerce takes place 
> with credit card numbers exchanged over telephones. These can easily 
> be remembered, copied, abused, etc...

Credit card transactions made with a PIN, with a signature, or with 
neither do not have the same legal value.

>> (This comment by someone called "bong" talks of the very same problem
>> for OpenID [*].)
> Bong is talking about something different.

It's the very same problem, at the user authentication level.

> He is saying that OpenId does not come with any notion of trust.
> [[
> OpenID has all the technical details worked out in terms of how to 
> exchange authentication information and credential information. But 
> OpenID has no technology that conveys trust. And trust is not something 
> that can be done technically. It’s a business agreement.
> ]]
> With FOAF+SSL we have the authentication piece.

Exactly, but you have to trust what you do the authentication with 
*before* you do the authentication (and keep a record of how that was 
done for auditing purposes). That trust comes with signed information 
(or information you trust by another mechanism), not by just 
dereferencing a URI you don't know anything about.
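As an illustration of that point, here is a simplified, hypothetical 
sketch. The HMAC here merely stands in for a real public-key signature 
over the FOAF document (a real deployment would use a detached signature 
so the verifier needs no shared secret); the function names and the key 
are assumptions for the example only.

```python
import hashlib
import hmac

def sign_foaf(document_bytes, signer_key):
    # Stand-in "signature": an HMAC over the document. A real deployment
    # would use a public-key signature (e.g. a detached signature or
    # XML-DSig), so that verifiers need no shared secret.
    return hmac.new(signer_key, document_bytes, hashlib.sha256).digest()

def trusted_foaf(document_bytes, signature, signer_key):
    # Accept the FOAF data only if it verifies against key material
    # trusted *before* authentication takes place.
    expected = hmac.new(signer_key, document_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

key = b"trusted-out-of-band"
doc = b"<https://romeo.example/#me> cert:key ..."
sig = sign_foaf(doc, key)
print(trusted_foaf(doc, sig, key))         # True
print(trusted_foaf(doc + b"!", sig, key))  # False: tampered document
```

The point is simply that the trust anchor (here, `key`) is established 
out of band, before any authentication relies on the document's content.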

There's no point trying to figure out if a user connecting to your 
service with URI x is a friend of one of your friends (or whatever else 
the semantic web says about x, key info excepted) if you haven't made 
sure that this user was indeed x. Establishing the chain of trust and 
responsibilities that verifies the binding between the public key and 
URI x is crucial. (I'm not even talking here of evaluating how much you 
trust the various FOAF files, etc. from which you infer information 
about x in order to perform your authorisation decision.)
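The binding check itself is simple once the key material has been 
extracted; here is a hypothetical sketch (dereferencing and RDF parsing 
omitted) of the comparison at the heart of FOAF+SSL:

```python
def verify_webid_binding(cert_modulus, cert_exponent, published_keys):
    """The public key in the client's certificate must appear among the
    keys published in the document dereferenced from the WebID URI.
    published_keys is a list of (modulus, exponent) pairs assumed to be
    already extracted from the RDF."""
    return (cert_modulus, cert_exponent) in published_keys

# Example: the published document lists Romeo's key
published = [(0xC0FFEE, 65537)]
print(verify_webid_binding(0xC0FFEE, 65537, published))  # True
print(verify_webid_binding(0xDEAD, 65537, published))    # False
```

Everything discussed above is about how much to trust `published_keys`, 
i.e. the document it came from, not about this comparison itself.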

The dereferencing way of verifying the certificate offers almost the 
same security as OpenID (it gives you a little bit more, in that you can 
log the public key, which is only useful a posteriori for auditing). 
That's fine, but not for all situations.
The real power of FOAF+SSL is that you can choose when you need what, 
but in most situations where proper (in particular, more legally 
binding) authentication is required, you'll need to do some FOAF signing 
or rely on another trust mechanism.

Best wishes,

