[foaf-protocols] First WebID Teleconference minutes (July 27th 2010)

Bruno Harbulot Bruno.Harbulot at manchester.ac.uk
Mon Aug 2 13:36:54 CEST 2010

Hi all,

Sorry, I was unable to attend this first teleconference as I was 
travelling at the time. I've managed to listen to the audio recording.

Here are a few comments:

On 30/07/10 21:05, Stéphane Corlosquet wrote:
> 2) Discuss short-term (spec) and long-term goals (PaySwarm)
> ===========================================================
> Doug: the faster you can move, the better. I support the idea of
> iterating quickly. We can make an interest group around this, but it
> might not be more effective. I volunteer to bring WebID toward a W3C
> rec track, though we don't need to discuss W3C matters now.
> Manu: everyone would be happy if WebID was brought on a W3C rec track,
> that's the goal for everyone.
> Doug: I would not be surprised if W3C started to have an interest in
> identity online (OpenID, FOAF), that's another way "in" to get WebID on
> W3C rec track.

If I remember the audio recording well, someone asked whether some of us 
were W3C members. As far as I'm aware, the University of Manchester is a 
W3C member, so I could ask our local representative for more details.

> Henry: 2 layers: (1) core WebID, and (2) there is another layer above
> which ties WebID with OpenID: openid4me (could be standardized). Other
> projects to tie WebID into the other identity schemes out there.
> Manu: re. long term goals. We know WebID as a universal login mechanism
> and we have some use cases to demonstrate it, but we might need
> something else to complete the story, to show people why it's useful to
> have a universal identity along with info which you can associate with,
> and how you can relate it to the rest of your activities online, e.g.
> list your name, email, picture, plus other services like twitter as your
> microblogging, facebook as your social network, payswarm as your
> transaction service, etc.
> You can use WebID to integrate with openID, but also use WebID in the
> OpenId protocol for example for OpenID providers to verify the HMAC
> signature: the key being not only can you use WebID as an OpenID, but it
> can be part of the core OpenID protocol as well. See WebID not only as
> a universal ID, but as something that can help other services like
> twitter and facebook. ACTION: create an "Integrating WebID with other
> identities" document.
> Reto: Conflict of interest between integrating all the other services
> with WebID and keeping the WebID spec small and beautiful. There is too
> much about OpenID in the current spec; it could be moved to the bottom
> or into an appendix.
> Henry: or move it to a different document.
> Stephane: it's still good to keep some aspects of OpenID in the main
> spec as a means to explain what WebID is in comparison to OpenID, but
> not to the extent of the current OpenID-related section, which could
> be moved to another document.
> Henry: the more core is technology agnostic, the longer it will be
> valid. Specifics or comparison with other technologies should be placed
> into separate documents. It also allows us to save time and rapidly get
> the core spec published, and spend more time later on these other matters.
> Manu: what about putting this into a primer?
> Henry: makes sense
> Reto: primer should be something else. The primer should explain to
> implementers how to provide a WebID and authenticate with WebID, but
> the OpenID comparison should live elsewhere.

I think it's a good idea to put as little (or nothing) about OpenID in 
the core spec, but have it in a separate document instead.

> Manu: several documents: core spec, use cases and requirements, primer,
> comparison with OpenID. volunteers needed.

I'm also half-tempted to split the document between the verification 
part and the interaction with TLS, in the same way as the PKIX 
specifications and the TLS specifications are kept separate.
This effectively allows the certificate verification and its binding to 
an identity to be independent of what the certificate is used for. WebID 
certificates could, in principle, be used for e-mail via S/MIME.

This being said, it's probably more practical to put this in the same 
document. We might want to say that the sections about TLS are only 
applicable when it's used with TLS, but that the rest could be used in 
other contexts.
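To illustrate that split, here is a toy sketch of the verification step 
decoupled from TLS. The profile layout and function names are 
illustrative only (the real cert/rsa vocabulary types its literals 
differently), and the RDF/XML is naively treated as plain XML rather 
than parsed into a proper RDF graph:

```python
import xml.etree.ElementTree as ET

RDF_NS = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
CERT_NS = "{http://www.w3.org/ns/auth/cert#}"
RSA_NS = "{http://www.w3.org/ns/auth/rsa#}"

# Hypothetical profile shape, loosely based on the cert/rsa vocabulary.
PROFILE = """\
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:cert="http://www.w3.org/ns/auth/cert#"
         xmlns:rsa="http://www.w3.org/ns/auth/rsa#">
  <rsa:RSAPublicKey>
    <cert:identity rdf:resource="https://example.org/people/alice#me"/>
    <rsa:modulus>cafe1234</rsa:modulus>
    <rsa:public_exponent>65537</rsa:public_exponent>
  </rsa:RSAPublicKey>
</rdf:RDF>
"""

def keys_in_profile(profile_xml, webid):
    """Collect the (modulus, exponent) pairs the profile claims for `webid`.

    Toy approach: only this exact XML layout is understood; a real
    verifier would build an RDF graph and query that instead.
    """
    keys = set()
    root = ET.fromstring(profile_xml)
    for key in root.iter(RSA_NS + "RSAPublicKey"):
        identity = key.find(CERT_NS + "identity")
        if identity is None or identity.get(RDF_NS + "resource") != webid:
            continue
        modulus = (key.findtext(RSA_NS + "modulus") or "").strip().lower()
        exponent = (key.findtext(RSA_NS + "public_exponent") or "0").strip()
        keys.add((modulus, int(exponent)))
    return keys

def verified(cert_modulus_hex, cert_exponent, profile_xml, webid):
    """True if the presented certificate's public key appears in the profile."""
    return (cert_modulus_hex.lower(), cert_exponent) in keys_in_profile(
        profile_xml, webid)
```

The point is that nothing above depends on TLS: the certificate's public 
key could just as well have come from an S/MIME message.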

> 3) Requiring only one serialization (RDFa) vs. two (RDFa + RDF/XML)
> ===================================================================
> Manu: issue is if there are 2 serialization formats, the risk is it may
> be more difficult for people to implement WebID.

Was this really the issue? It depends on the side. I found this 
discussion quite confusing, as it lacked context as to which side of 
the exchange we were talking about (except for one of Reto's points, 
unless I've missed others).

Firstly, the points I was making in previous threads on the topic were 
about mandating both RDF/XML and RDFa on the verification side, while 
leaving more freedom to the publisher side (although it then becomes 
the publisher's problem if it only publishes in formats that the 
verification side cannot process, i.e. none of the mandatory ones).

Secondly, when talking about automatic transformations, we also need to 
say which side is expected to make the transformation.

> Henry: that's why we need to phrase it in terms of automatic
> transformability of a serialization into an RDF model.

> Henry: I agree, that's why the whole thing is defined semantically. The
> core requirement we need is simply an automatic way to transform any
> format into RDF.
> Doug: what is the criteria to be transformable in RDF? any format is
> good enough and could technically be transformed into RDF.
> Henry: machine can find out the mimetype, lookup a transformation
> mechanism to get it into RDF triples which happens automatically without
> requiring any human interaction. GRDDL and XSPARQL could help building
> bridges between other formats and RDF. Any XML format could be usable.
> We don't want it to be too complicated (so start with two first). Add a
> paragraph on extensibility and tell our readers how they can integrate
> their format with WebID. Within the next year, we will see if more
> formats come up and we'll decide if we should include them.

There are two ways of seeing this:

(a) The GRDDL magic happens on the publisher side (the server). That's 
similar to doing server-side XSLT to transform XML into HTML for 
browsers. The end result is that the server produces RDF/XML (or 
another RDF format) out of this, but this is completely transparent to 
the client: how the publisher produces this representation (via GRDDL 
or not) is not the client's concern.

(b) The GRDDL magic happens on the verification side (the client). 
That's where things get more complicated. I think expecting the verifier 
to be able to accept any XML and do the right transformation 
automatically is unrealistic and introduces a lot of complexity, not 
only in terms of implementation, but also in terms of security issues 
(where do you get the GRDDL transform from, and how can you trust it?).
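Option (a) keeps the verifier simple: Henry's "lookup a transformation 
mechanism by mimetype" step then reduces, on the verification side, to 
a fixed registry of supported media types. A rough sketch (hypothetical 
function names, with only the two mandatory formats wired in and 
everything else rejected rather than GRDDL-transformed):

```python
def parse_rdfxml(data):
    # Placeholder: hand off to a real RDF/XML parser.
    return {"format": "rdfxml", "data": data}

def parse_rdfa(data):
    # Placeholder: hand off to a real (X)HTML+RDFa extractor.
    return {"format": "rdfa", "data": data}

# Media type -> "transformation into an RDF model".  Supporting a new
# serialization is just a matter of registering another entry; a
# GRDDL-style verifier would instead fetch the transform named by the
# document itself, with the trust problems discussed above.
TRANSFORMS = {
    "application/rdf+xml": parse_rdfxml,
    "application/xhtml+xml": parse_rdfa,
    "text/html": parse_rdfa,
}

def graph_from_response(content_type, body):
    """Dispatch on the media type (ignoring parameters such as charset)."""
    media_type = content_type.split(";", 1)[0].strip().lower()
    transform = TRANSFORMS.get(media_type)
    if transform is None:
        raise ValueError("no known transformation to RDF for " + media_type)
    return transform(body)
```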

> Manu: We can phrase it as follows: right now we support RDFa and RDF/XML
> but any format which can be transformed into RDF triples can be
> integrated with WebID. The formats which will work out of the box are
> RDFa, RDF/XML or other RDF serializations.

This is kind of what I had suggested a couple of weeks ago:

> A Verification Agent MUST be able to process documents in RDF/XML
> A server responding to a WebID Profile request MUST be able to return a
> representation in RDF/XML (using media type application/rdf+xml) or
> XHTML+RDFa (using either media type text/html or media type
> application/xhtml+xml). In addition, either parties may support any
> other RDF format via HTTP content-type negotiation.

The new version makes things worse, I think:

> A Verification Agent must be able to process documents in RDF/XML
> [RDF-SYNTAX-GRAMMAR] and XHTML+RDFa [XHTML-RDFA]. A server responding
> to a WebID Profile request should support HTTP content negotiation.
> The server must return a representation in RDF/XML for media type
> application/rdf+xml. The server must return a representation in
> XHTML+RDFa for media type text/html or media type
> application/xhtml+xml. Verification Agents and Identification Agents
> may support any other RDF format via HTTP content negotiation.

Mandating or recommending content-type negotiation could make things 
more difficult for the publisher side.
Mandating that the HTML returned contain some RDFa is also a bad thing, 
I think. The verification agent could very well request 
application/rdf+xml first and then text/html, in order of preference. 
A server supporting content-type negotiation could then return RDF/XML 
to that agent and plain HTML to a browser.
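To make the order of preference concrete, here is a sketch of the 
Accept header such an agent might send, together with a toy server-side 
chooser (a simplified q-value parse, not a full implementation of the 
HTTP negotiation rules):

```python
# Accept header a verification agent might send: RDF/XML preferred,
# XHTML+RDFa next, plain HTML last.
ACCEPT = "application/rdf+xml, application/xhtml+xml;q=0.8, text/html;q=0.5"

def preferred(accept_header, available):
    """Pick the available media type with the highest q-value (default 1.0)."""
    prefs = []
    for item in accept_header.split(","):
        parts = [p.strip() for p in item.split(";")]
        media_type, q = parts[0], 1.0
        for p in parts[1:]:
            if p.startswith("q="):
                q = float(p[2:])
        prefs.append((q, media_type))
    for _, media_type in sorted(prefs, reverse=True):
        if media_type in available:
            return media_type
    return None
```

A server publishing plain HTML for browsers and a separate RDF/XML 
representation can then satisfy such an agent without embedding any 
RDFa.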

> Doug: caution against talking about it as RDF, in order to keep on
> board the communities who don't like RDF solutions. Define the
> semantics as the normative requirement, and keep things like RDFa as
> possible serializations. Maybe key/value pairs would work.
> Henry: key value pairs would work if they are linked to a mimetype.
> Let's see how we can phrase that. Use XSPARQL to transform a well known
> format out there into RDF.
> Manu volunteers to do a JSON-LD example and Stephane volunteers to do
> an example with XSPARQL.

The problem we're talking about is still making sure that the 
verification agent has a chance to verify the WebID by getting the 
WebID profile document.
I don't mind people implementing JSON support in their verification 
agents, it would indeed make sense, but I think there should be a core 
spec to make sure that things will work.
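As a rough illustration of the JSON idea (a hypothetical key layout, 
not Manu's actual JSON-LD proposal), a verifier could map such a 
profile onto the same kind of triples:

```python
import json

# Hypothetical JSON layout for a WebID profile -- purely illustrative,
# including the predicate names.
PROFILE = """\
{"webid": "https://example.org/people/alice#me",
 "keys": [{"modulus": "CAFE1234", "exponent": 65537}]}
"""

def triples_from_json_profile(text):
    """Turn the hypothetical JSON profile into (subject, predicate, object) triples."""
    doc = json.loads(text)
    webid = doc["webid"]
    triples = []
    for key in doc.get("keys", []):
        triples.append((webid, "rsa:modulus", key["modulus"].lower()))
        triples.append((webid, "rsa:public_exponent", int(key["exponent"])))
    return triples
```

As long as the mapping to triples is well defined, the rest of the 
verification logic stays the same as for the mandatory formats.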

Best wishes,

