[foaf-protocols] new sequence diagram

Henry Story henry.story at bblfish.net
Tue Nov 22 19:46:31 CET 2011


On 22 Nov 2011, at 18:19, Peter Williams wrote:

>  
>  
> I went looking for my implementation "platform" - i.e. code 95% done, to which I add the webid validation protocol. I found it in the WIF SDK (a sample from Microsoft, with installers and configuration scripts that configure the "pipeline" platform "professionally"), and it's similar to what I see others doing when showcasing what the webid 1.0 spec adds. It's not just another php script on apache, then; it does something that shows what Windows can and cannot do in the hands of third parties. In the hands of Microsoft engineers, obviously lots more is possible.
>  
> It's a simple little ajax page, whose javascript accesses a second page on the same site to obtain an HTML fragment, which is duly rendered in the DOM by the browser. An existing login interceptor guards that process, redirecting to an IDP (much like foafssl.org) to get an assertion back that authorizes the minting of a web session, which in turn becomes a cookie-based ticket that can be posted back to re-authorize the web session in a stateless app. At the IDP site, the login page will require a client cert over https and perform the webid validation protocol. It will send back to the requestor the results of its finding, much as the foafssl IDP used to assert back to foaf.me. 

That seems OK. The foaf+ssl-like site is then the one implementing the WebID protocol. The other site is using another protocol.

>  
> Unless someone can find a basis for objection, I don't believe that violates any of the webid assumptions about the kind of design that one SHOULD be using. Each of the other 10+ samples seems to violate some or other design precept (by using CAs, using AD-managed trust relations, being all about SOAP, using JSON tokens in the HTTP auth header, using chains of asserting parties, and other violations of principle).

WebID does not claim to be the only protocol in town.

>  
> Now, all I need is a .net library that can build a graph from an RDFa document retrieved from a URI. The spec doesn't actually say that supporting RDFa is mandatory, but I think the webid community is moving that way. This is the missing piece now on windows. The turtle and RDF/XML cases were easy (I just reused the 50 lines of script I did 2+ years ago, and the supporting library). The issue now is RDFa.
> 


There are XSLTs that transform HTML+RDFa into RDF/XML, I think, for platforms that have not yet implemented an RDFa parser.
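For example, something along these lines could fill the gap until a native RDFa parser is available - a rough sketch in Scala using the JDK's built-in XSLT support; the stylesheet name and page URL are placeholders, and it assumes the profile page is well-formed XHTML:

    import java.io.{File, StringWriter}
    import javax.xml.transform.TransformerFactory
    import javax.xml.transform.stream.{StreamResult, StreamSource}

    object RdfaToRdfXml {
      def main(args: Array[String]): Unit = {
        // placeholder stylesheet: any RDFa-extraction XSLT would do here
        val xslt = new StreamSource(new File("RDFa2RDFXML.xsl"))
        // placeholder profile page; must be well-formed XHTML for a plain XSLT processor
        val html = new StreamSource("http://example.org/card")

        val out = new StringWriter
        TransformerFactory.newInstance.newTransformer(xslt)
          .transform(html, new StreamResult(out))

        // 'out' now holds RDF/XML, which the existing RDF/XML code path can load as a graph
        println(out.toString)
      }
    }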


> All the issues I used to struggle with on windows have gone away. I'm not sure I ever succeeded, but they forced me to build my own listener, own thread engine etc. They were all about webid (then foaf+ssl) rejigging https/SSL (particularly the zero DNs in the CA list, from the server). None of these are in the modern spec, however. Thus, I believe that the bog-standard windows platform is suitable for a webid demo. As other implementation code has shown, a bit of php script supported by the Graphite library is really all one needs to deliver. In my case, substitute PHP script with ASP.NET script, and find an equivalent .net library to Graphite.
> 
>  
> Date: Tue, 22 Nov 2011 11:32:47 -0500
> From: kidehen at openlinksw.com
> To: foaf-protocols at lists.foaf-project.org
> Subject: Re: [foaf-protocols] new sequence diagram
> 
> On 11/22/11 10:54 AM, Peter Williams wrote:
> 
> WANT AND NEED for webid, and linked data
>  
> (Delete now if it even LOOKS too long for you)
>  
> Folks may know (but here folks probably do not) that windows is a developer platform, as well as a consumer platform. Thus, anyone can build their own web server. It's a programmer's friend, enabling one to design. Folks in the bottom half of the class are MOST welcome. 
>  
> The idea is that anyone completing a first-year technical degree can fire up the Visual Studio IDE and launch a command line window that acts as an https listener. Typically, it exposes a web service endpoint, perhaps returning some JSON. It's all pretty trivial. Making that command line window into a windows service is also easy. There are plenty of builder tools out there to make intranet-grade windows services (like daemons, in the old system 7 unix model), listening on an HTTPS endpoint, on some port, bound to some cert, and bound to a policy for evaluating client certs in terms of trust AND content. The latter are not programmed, but are registered with the windows kernel. One registers an endpoint, one sets the parameters, and the HTTP driver in the kernel enforces a cryptographic-grade security policy, where the rings of the CPU are used to deliver assurance (including the assurance that a data spying policy is in place, if desired). Data is then released to user processes. 
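> (For comparison, the same "command line window that acts as an https listener" exercise is just as small on other stacks. A rough sketch in Scala using the JDK's built-in HttpsServer - the keystore path, password and port are placeholders:)
> 
>     import java.io.FileInputStream
>     import java.net.InetSocketAddress
>     import java.security.KeyStore
>     import javax.net.ssl.{KeyManagerFactory, SSLContext}
>     import com.sun.net.httpserver.{HttpExchange, HttpHandler, HttpsConfigurator, HttpsServer}
> 
>     object TinyHttpsListener {
>       def main(args: Array[String]): Unit = {
>         // the server cert lives in a keystore; path and password are placeholders
>         val ks = KeyStore.getInstance("JKS")
>         ks.load(new FileInputStream("server.jks"), "changeit".toCharArray)
>         val kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm)
>         kmf.init(ks, "changeit".toCharArray)
>         val ssl = SSLContext.getInstance("TLS")
>         ssl.init(kmf.getKeyManagers, null, null)
> 
>         val server = HttpsServer.create(new InetSocketAddress(8443), 0)
>         server.setHttpsConfigurator(new HttpsConfigurator(ssl))
>         server.createContext("/", new HttpHandler {
>           def handle(ex: HttpExchange): Unit = {
>             val body = """{"hello": "world"}""".getBytes("UTF-8") // some JSON, as above
>             ex.getResponseHeaders.add("Content-Type", "application/json")
>             ex.sendResponseHeaders(200, body.length)
>             ex.getResponseBody.write(body)
>             ex.close()
>           }
>         })
>         server.start() // an https web service endpoint, registered and listening
>       }
>     }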
>  
> A common use of this for user desktop machines is the (SOAP) web service that exposes management information over https to enterprise system management systems, implementing and enforcing management practices such as ITIL (the UK quality standard). It is a web standard, but it's the intranet form of the web that does not concern this group (I believe intranet and enterprise are all out of scope; only the public web is in scope for webid).
> 
> Your assumptions are incorrect re. scope. WebID is aimed at private (intranet), public (internet), hybrid (extranet) networks. If this misconception exists in a narrative or spec, it needs to be rectified.
> 
> 
>  
> Alternatively, one can run a web server such as IIS, which is designed for high volume, public exposure, and whose designers assume it will be attacked viciously by web culture, including the criminal elements that web culture fosters. IIS is designed for use in data centers, linked to server farms, load balancers, switch blocks, vmware clusters, SANs, and professional data center engineering generally.
> 
> Yes.
> 
>  
> In the windows world, with command line processes, windows services, or IIS web applications, one binds client cert requirements to the registered endpoint. The kernel HTTP driver then duly enforces the requirement. Two requirements are well specified: one has the ability to require or (merely) request client certs. As the URI in the HTTP request is evaluated, it is matched with a registered endpoint (pattern). Two common patterns are distinguished: a / endpoint (i.e. website endpoint) to which client cert requirements are attached, and a longer path endpoint (/vdir/ typically). In either case, the administrator (or installer script of a program) sets the requirement, to require or merely request a client cert. Since a web app may expose several paths, with different policies, different styles of https can be projected. One common style suits how the Mozilla-class browser expects to leverage https. Another, how AJAX controls typically consume data using JSON.
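> (A rough JVM-side analogue of the require/request distinction, for readers outside the windows world: in JSSE terms "require" maps to needClientAuth and "merely request" maps to wantClientAuth. This builds on the listener sketch above; note the caveat in the comments:)
> 
>     import javax.net.ssl.SSLContext
>     import com.sun.net.httpserver.{HttpsConfigurator, HttpsParameters}
> 
>     class ClientCertConfigurator(ssl: SSLContext) extends HttpsConfigurator(ssl) {
>       override def configure(params: HttpsParameters): Unit = {
>         // HttpsParameters carries no request URI - it is applied per connection,
>         // before any HTTP parsing - so a per-path policy has to be enforced at the
>         // application layer (or via renegotiation, as discussed later in this thread).
>         params.setWantClientAuth(true)    // "merely request": handshake succeeds without a cert
>         // params.setNeedClientAuth(true) // "require": handshake fails if no cert is presented
>       }
>     }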
>  
> The windows world is wider than the above, particularly in the last 5 years. (The above represents 10-15 year old thinking, from the early web.) More recently, API-centric https has come to the fore, in the so-called Windows Communication Foundation libraries, recently augmented with web SSO behaviours. Though the concept of endpoint registration has not gone away, one can now insert certificate-related interceptors into the pipeline as a third party. Though IIS is one tuning of the pipeline (to suit the needs of that web server), you can build your own. A hosted entity running in IIS can even vary how the IIS-mediated pipeline works, for that particular entity (and no, I don't mean reboot the process with an additional ISAPI/NSAPI).
> 
> Yes, all standard stuff for Windows platform. 
> 
>  
> For example, if you use IIS to host the ADFS v2 web application (ADFS is an IDP that accepts https client certs and mints SIGNED saml assertions in response), the tuning of the pipeline is not even the 15-year-old web app model I described above - even though it is "hosted" by the IIS platform (so it is suitable for internet endpoints). The pipeline has been set up so that only certain client certs will be recognized by the "validation agent" - to use a webid term. Is windows the agent? No. Is IIS the agent? No. Is the ADFS pipeline config the agent? Yes. Is the ADFS pipeline behaviour the same as the next-door web app running in a web garden process pool by default? No.
>  
> In the ADFS pipeline config, the windows kernel http driver, the IIS platform and the interceptors in the per-app pipeline enforce particular policies. First, the cert trust list and the population of CA DNs in SSL messages are controlled by that pipeline. Only those CAs that are bound up with the windows host's ActiveDirectory forests/domains (and its inter-domain, cross-forest trust management concept) are validatable. When the client cert is evaluated, it must have content and extensions of certain types. Together, the bundling of all this implements (informally) what is called the world of a WindowsIdentity. It's exactly what it says: an identity for the windows operating system, and is rather similar to java (which has a similar java-world notion).
>  
> So, in the windows programming world, there is no such thing as a "web server" like CERN httpd, apache, or tomcat. There are just platforms, out of which folks build web endpoints of various kinds that exploit the HTTP driver in the kernel. This year, the pipeline metaphor is in vogue.
> 
> I once took a lot of heat for stating that Web 2.0 was really about executable endpoints. Basically, the pipeline you describe. This was at a time when Tim O'Reilly was hell bent on owning the term :-)  
> 
>  
> Logically, a professional windows system architect would be using all this "assured" apparatus when building a security policy for webid. But you don't have to. You can alternatively run a php interpreter in a user process, expose some endpoint using a socket API (several flavors to pick from), and run a wordpress instance. Which all reminds me a little of my first computer, which had no OS, just a BIOS, a bootloader, and a Forth interpreter.
> 
> Conclusion: Windows is a good HTTP player. Yes it is. Ditto Mac OS X, iOS 5, Android etc. WebID can provide value in all of these realms. Folks can choose the programming language (high or low level) or none at all, etc.
> 
> There is nothing about WebID that's platform specific. There's nothing about URIs that's platform specific either :-)
> 
> 
> Kingsley 
>  
> From: home_pw at msn.com
> Date: Mon, 21 Nov 2011 17:37:42 -0800
> To: home_pw at msn.com
> CC: henry.story at bblfish.net; public-xg-webid at w3.org; foaf-protocols at lists.foaf-project.org
> Subject: Re: [foaf-protocols] new sequence diagram
> 
> Suggestion:
> 
> Add text to the spec on the topic that servers must/should support a PUT operation whose request data is split over 2 or more completed runs of the SSL handshake protocol, on one TCP connection.
> 
> This is testable with the unit test suite. First one can tell if the MUST is delivered (assuming it's a MUST), and then one can connect over https, begin a PUT, orchestrate delivery of a client cert in a second handshake (with an unexpired cert, say), and then have the client request a new (third) handshake, which presents an expired cert whose webid does not dereference. The test suite can test whether the doc is put, or not.
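> (A rough sketch of such a probe in Scala with plain JSSE sockets - host, path and body are placeholders, and here the client initiates the extra handshake, whereas in the scenario above the server would request it:)
> 
>     import java.io.{BufferedReader, InputStreamReader, PrintWriter}
>     import javax.net.ssl.{SSLSocket, SSLSocketFactory}
> 
>     object SplitPutProbe {
>       def main(args: Array[String]): Unit = {
>         val host = "webid.example.org" // placeholder test server
>         val body = "<> a <http://example.org/Doc> ."
>         val socket = SSLSocketFactory.getDefault
>           .createSocket(host, 443).asInstanceOf[SSLSocket]
>         socket.startHandshake() // first handshake: server-auth only
> 
>         val out = new PrintWriter(socket.getOutputStream, true)
>         out.print(s"PUT /test/doc HTTP/1.1\r\nHost: $host\r\n" +
>           s"Content-Type: text/turtle\r\nContent-Length: ${body.length}\r\n\r\n")
>         out.print(body.substring(0, 10)) // first half of the request body
>         out.flush()
> 
>         socket.startHandshake() // second handshake on the same tcp connection
> 
>         out.print(body.substring(10)) // rest of the body after the renegotiation
>         out.flush()
> 
>         val in = new BufferedReader(new InputStreamReader(socket.getInputStream))
>         println(in.readLine()) // status line tells us whether the doc was put
>         socket.close()
>       }
>     }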
> 
> Out of interest, which implementations do this, and do you put the doc or not?
> 
> Is the rule that no expired client cert or invalid webid must have been encountered/handled on the tcp connection before the last byte of the put request is handled?
> 
> Does a positive cert/webid received on the tcp connection cancel a negative cert/webid?
> 
> Do all the bytes of the put get placed in the doc? - even those received for a session (section) that failed mutual auth? Or are they all good?
> 
> Sent from my iPhone
> 
> On Nov 21, 2011, at 3:38 PM, "Peter Williams" <home_pw at msn.com> wrote:
> 
> 
> 
> In IIS I can make the site require https and a client cert, which means no uri is processed till the cert is received. There is no http (vs https) binding, and the https binding attaches to the first / component of the uri path.
> 
> Or I can bind the client cert requirement to a longer path element of the uri. If an event handler at the site level is fired upon handling a uri (received over ssl, typically) and redirects to /guarded/login.aspx, the path component of /guarded/, when next presented to the server https engine, will induce a client-cert-seeking handshake, should the ssl session not already be storing the client cert. This is the classical double handshake. 
> 
> I'm going to do the latter, since I cannot determine from the spec that it's non conforming. It's bog standard https app building, as well. 
> 
> I have no intention of enabling the use of https you suggest (in my code): a single http put could span multiple handshakes. I will only offer classical web app behavior, in which the cert opens the admittance guard that otherwise blocks the inbound layer-7 port attached to the uri path component in the registration of the ssl endpoint with the windows kernel. Others can offer their own assurances, as they see fit. Nothing in the spec gives any guidance on this... (in any comprehensible manner, anyway)
> 
> Server variable is the old name for the cgi-like information given to a cgi-class script. The vars are not those listed in the cgi spec (but play a similar role). For example, server vars (subtly distinct from cgi vars) might indicate the session ids, the use of American cipher suites limited to 40 bits, the elements of the subject DN or the SAN, etc.
> 
> Sent from my iPhone
> 
> On Nov 21, 2011, at 2:43 PM, "Henry Story" <henry.story at bblfish.net> wrote:
> 
> I mean that the connection is renegotiated in the way RFC 5746 discusses, which solves the renegotiation issue that came up
> a few years ago.
> 
>  http://www.ietf.org/rfc/rfc5746.txt
> 
> This fix has now been widely implemented - and there is even a note on this in Java 6 and Java 7, for example: 
> http://download.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html#tlsRenegotiation
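> 
> (For what it is worth, that note documents system properties that tune renegotiation behaviour on a patched JVM. A sketch, set before the first handshake; the values here only mirror the documented defaults, so check the linked guide for the authoritative semantics:)
> 
>     // Must be set before the first TLS handshake in the JVM; see the JSSE note above.
>     System.setProperty("sun.security.ssl.allowUnsafeRenegotiation", "false") // only RFC 5746 renegotiation
>     System.setProperty("sun.security.ssl.allowLegacyHelloMessages", "true")  // interoperate with unpatched peers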
> 
> 
> 
> 
> On 21 Nov 2011, at 04:42, Peter Williams wrote:
> 
> Be aware that you are using loaded terms (to one who is an expert on ssl and https).
> 
> For there to be (over https) an (http) application exchange and only then a request for an https client cert implies a double handshake (typically). The first delivers the app data, which (probably by uri component) induces the server to ask the client to perform a second handshake. In the course of that, the server may demand a client cert (should no client cert have been previously communicated in an unsolicited manner).
> 
> yes.
> 
> a second app data request is then delivered through the server socket,
> 
> I don't think there needs to be a second request at the APP level. The interesting thing about certificate renegotiation, as I see it, is that
> this happens during the same TLS connection, and whilst the application is processing data. So the client could make a big PUT of a video, streaming a large amount of data to the server, as the guard is deciding whether the client needs access. The renegotiation allows the server, as I understand it, to not interrupt the request at the application layer.
> 
> at which point the server variables for both client cert and ssl cipher suite information will typically be attached to the typical web server's http request (cgi/servlet context).
> 
> Not sure what you mean by server variables.
> 
> 
> This is essentially what foafssl.me does. As I recall, the foafssl idp site acted similarly. It's not obvious from the script code of webauthid site if it also does this.
> 
> Not sure what webauthid is. 
> 
> On the other hand I have got this to work using the netty engine in Scala with the following code (I showed a less elegant version previously). Essentially, at line 192 the server does not have the client certificate, yet at 198 it does. (You can click on the line numbers to get to the code.)
> 
>    188     
>    189     val sslh = r.underlying.context.getPipeline.get(classOf[SslHandler])
>    190     
>    191     trySome(sslh.getEngine.getSession.getPeerCertificates.toIndexedSeq) orElse {
>    192       sslh.setEnableRenegotiation(true)
>    193       r match {
>    194         case UserAgent(agent) if needAuth(agent) => sslh.getEngine.setNeedClientAuth(true)
>    195         case _ => sslh.getEngine.setWantClientAuth(true)  
>    196       }
>    197       val future = sslh.handshake()
>    198       future.await(30000) //that's certainly way too long.
>    199       if (future.isDone && future.isSuccess)
>    200         trySome(sslh.getEngine.getSession.getPeerCertificates.toIndexedSeq)
>    201       else
>    202         None
>    203     }
>    204 
> 
> 
> 
> Be clear. If the first app data is to be delivered to layer-7 code (i.e. some php script) which then decides to induce the handshake that demands a mutual-auth ssl session, this is subtly different from the following alternative: the php script only ever receives one indication, which always has a client cert in its context.
> 
> Make sure it's clear which pattern must be implemented.
> 
> If it's the first pattern, can there be multiple http req/resp exchanges over https before one "upgrades" this session to a mutual-auth session?
> 
> If it's not obvious why there is any need to distinguish these, just delete the mail.
> 
> Unless I am mistaken, I don't think we need to go into this. I will try to go a little bit more into RFC 5746 in the explanation in the new version of the spec I am developing, because it is important to point out that this issue has been solved. But as much as possible I want to rely on the TLS standards as they are. 
> 
> http://bblfish.net/tmp/2011/11/21/index-respec.html#authentication-sequence
> 
> Perhaps we need to say something more.
> 
> Here is a question for you: where are the WANT and NEED methods of requesting a client certificate defined? Every server seems to use those concepts, ...
> 
> Henry
> 
> 
> 
> 
> On Nov 20, 2011, at 5:05 AM, "Henry Story" <henry.story at bblfish.net> wrote:
> 
> Peter Williams had some criticism of the sequence diagram, which, it is true, whilst being simple is perhaps merging too many things together.
> 
> So I propose the following much more precise diagram. I plan to also create a state diagram that would show interactions 
> more clearly. Here we are looking at a request that succeeds.
> 
> 1. We set up a TLS session. The server authenticates.
> 2. The application layer protocol starts. It passes a guard which can look at the application layer protocol metadata
>    and request the client certificate if needed. (The guard can have access to ACL information to make this decision.)
> 3. The Guard decides:
>     a. client authentication is needed (it's not available in cache) and asks the TLS layer to do that
>     b. the TLS layer sends a client authentication request
>     c. the client selects a certificate
>     d. the TLS agent verifies only that the public key sent in the certificate can decrypt the encoded token
>        (we need to find the technical jargon for this: in TLS the server checks the signature in the client's CertificateVerify message)
>     e. if it does, the guard ends up with the client certificate
>  
> 
> <WebIDSequence-friendly.jpeg>
> 
> 4. The Guard needs to verify the WebID claims in the certificate, so it sends those to the WebID verifier, which follows the
>    well-known procedure, either going through a cache or fetching the information directly from the web (5). (A rough sketch of this step follows the list.)
> 
> 6. Given the identities, the guard can decide whether the user with that identity has access to the requested resource by considering
> its ACLs and the graph of trusted information. (Out of scope of detailed study here.)
> 
> 7. Access to the resource is granted, and the server can send the application layer response to the client.
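> 
> (A rough sketch of steps 4-5, the WebID verifier, in Scala with Apache Jena - the package names assume a recent Jena, the cert ontology terms come from http://www.w3.org/ns/auth/cert#, and caching, error handling, content negotiation and non-RSA keys are all ignored:)
> 
>     import java.math.BigInteger
>     import java.security.cert.X509Certificate
>     import java.security.interfaces.RSAPublicKey
>     import org.apache.jena.rdf.model.ModelFactory
>     import scala.collection.JavaConverters._
> 
>     object WebIDVerifier {
>       val CERT = "http://www.w3.org/ns/auth/cert#"
> 
>       /** Does the profile at `webid` list the RSA key found in the client certificate? */
>       def claimHolds(webid: String, cert: X509Certificate): Boolean = {
>         val pubKey = cert.getPublicKey.asInstanceOf[RSAPublicKey]
>         val model  = ModelFactory.createDefaultModel()
>         model.read(webid) // step 5: dereference the WebID (a cache could be consulted first)
> 
>         val agent = model.getResource(webid)
>         val keys  = model.listObjectsOfProperty(agent, model.getProperty(CERT + "key"))
> 
>         keys.asScala.exists { k =>
>           val key = k.asResource()
>           val mod = key.getProperty(model.getProperty(CERT + "modulus")).getLiteral.getLexicalForm
>           val exp = key.getProperty(model.getProperty(CERT + "exponent")).getLiteral.getLexicalForm
>           // assumes clean xsd:hexBinary / xsd:integer literals
>           new BigInteger(mod, 16) == pubKey.getModulus &&
>             new BigInteger(exp) == pubKey.getPublicExponent
>         }
>       }
>     }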
> 
> -----
> 
> The good thing in this diagram is that
>  
>  1. We can make clear that the TLS agent can be bog standard - it just needs to not throw an exception if it does not recognise the issuer.
>  2. The guard is working at the application layer, and can communicate with the underlying TLS layer.
>  3. We don't need to specify what the protocol of the request is - but we can give examples of HTTP requests.
>  4. The above makes clear how we can get around any browser issues, and how we can get rid of the most problematic user interface problem: namely the automatic request of the client certificate.
>  
> Henry
> 
> 
> 
> Social Web Architect
> http://bblfish.net/
> 
> _______________________________________________
> foaf-protocols mailing list
> foaf-protocols at lists.foaf-project.org
> http://lists.foaf-project.org/mailman/listinfo/foaf-protocols
> 
> Social Web Architect
> http://bblfish.net/
> 
> 
> 
> _______________________________________________
> foaf-protocols mailing list
> foaf-protocols at lists.foaf-project.org
> http://lists.foaf-project.org/mailman/listinfo/foaf-protocols
> 
> 
> -- 
> 
> Regards,
> 
> Kingsley Idehen	      
> President & CEO 
> OpenLink Software     
> Company Web: http://www.openlinksw.com
> Personal Weblog: http://www.openlinksw.com/blog/~kidehen
> Twitter/Identi.ca handle: @kidehen
> Google+ Profile: https://plus.google.com/112399767740508618350/about
> LinkedIn Profile: http://www.linkedin.com/in/kidehen
> 
> 
> 
> 
> 
> _______________________________________________
> foaf-protocols mailing list
> foaf-protocols at lists.foaf-project.org
> http://lists.foaf-project.org/mailman/listinfo/foaf-protocols

Social Web Architect
http://bblfish.net/
