[foaf-protocols] WebID - mandated syntax or market solution? was WebID Incubator Charter draft
kidehen at openlinksw.com
Sat Dec 18 22:31:43 CET 2010
On 12/18/10 3:46 PM, peter williams wrote:
> Folks were discussing ASN.1 earlier, and the ability of the language to notate EAVs (with each ASN.1 notation design being embodied in the macros, and each having different benefits to the specifier or code-writer). It was a self-extending notation system, that is, one able to specify new syntaxes for both types and values and to define a notation for each. But this is only part of the story.
> In high assurance crypto, the last thing you want is the variability from the likes of the above. You want code to do X, and provably nothing other than X: not more, not less, just X. And you want the proof to be "implicitly associated" with the value. The assurance is the proof, but the proofs don't have to be axiomatic in basis.
> And this leads to a story.
> BBN is famous for running with a DARPA contract that made early internet routers. It was (and may still be) an NSA contractor for crypto boxes for various functions. It's also famous for then facilitating crypto tunnels supporting late-1980s-era IP VPN overlays on the frame-relay/DDN X.25 switched nets. Anyway, the crypto box supported early TAC (read Cisco) routers communicating over local serial lines, driven much like the ports came out of your first IBM PC based on simple UARTs. Trouble is, all the messages sent to the crypto boundary of the crypto box were notated in (early, non-compilable) ASN.1, as a shorthand for the nested EAV byte strings. Regardless of ASN.1's formal power, one had to "confine" the notation to characterize how the values were NOT to be of variable length.
> Now, given that EAV model, for one of the value-encoding schemes, the tag concept (read URN) was a number that had various more or less compact forms, which might yield variable-length bit string results. Given the high-assurance requirement, folks used the compact form, ensuring that the structure could be represented by C code, where the getter/setter code was generated by a high-assurance compiler (or a Unix compiler, for the rest of us using the export-grade box). The serialization was not defined by rules, but by the memory architecture of the CPU, as one looked at the value from the (von Neumann) CPU's perspective. Between compiler tricks and CPU tricks, one could inject certain "assurance features" into what are simple values in their computer-science treatment. (You see some of this applied to Intel TPM methods today for commodity PCs using media and firmware, which link compiler theory with "CPU-downloadable" microcodes redefining the meaning of CPU instructions.)
> Anyway, it's possible to take special cases of the compactness rules and ensure the tags always fit in fixed fields, contrary to the normal mantra. To this, add compiler theory with defined outputs that then has [a suitable] CPU work a particular way when doing getting/setting, where the CPU instructions used by the fetch/execute engine can detect whether they are being "driven" appropriately by trusted code from the particular compiler (or not). If so, you get "constructed" values (EAVs) whose authenticity can be implicitly verified. There is an implicit signature, that is.
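The compact-tag constraint described above can be sketched concretely. The following is a hypothetical illustration (not from the thread) of the BER/DER identifier-octet rules: tag numbers below 31 always occupy exactly one fixed octet, while the "high tag number" form is variable-length, which is exactly the variability a high-assurance profile would rule out.

```python
def encode_tag(tag_class: int, constructed: bool, number: int) -> bytes:
    """Encode BER/DER identifier octets for a tag number.

    tag_class: 0 = universal, 1 = application, 2 = context, 3 = private.
    """
    first = (tag_class << 6) | (0x20 if constructed else 0)
    if number < 31:
        # Compact form: the whole tag fits in exactly one octet,
        # so a fixed-width field in a C struct can hold it.
        return bytes([first | number])
    # High-tag-number form: variable length, base-128 digits with a
    # continuation bit on every octet except the last.
    first |= 0x1F
    septets = []
    while number:
        septets.append(number & 0x7F)
        number >>= 7
    septets.reverse()
    body = bytes([0x80 | s for s in septets[:-1]] + [septets[-1]])
    return bytes([first]) + body
```

Confining a protocol to tag numbers below 31, as the message describes, guarantees every tag field is one octet, so getter/setter code over fixed memory offsets can be generated mechanically.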
> Now, this is all somewhat different to the usual arguments, limited to CS and logic.
> I'm hoping that WebIDs characterized in the XG world can at least *be defined to* include techniques such as the above, when they are defined by a particular WebID protocol, with W3C thus augmenting the possibilities of the trustworthy web beyond the [sem]web architecture as it's conceived today. Not all authentication is a simplistic PGP hash and [RSA] signature!
> This would allow the likes of Intel to get behind WebIDs... and the W3C activities resulting from the XG effort.
Another contribution, while on the subject of history and the real history of the Internet.
> -----Original Message-----
> From: foaf-protocols-bounces at lists.foaf-project.org [mailto:foaf-protocols-bounces at lists.foaf-project.org] On Behalf Of Jirí Procházka
> Sent: Saturday, December 18, 2010 10:27 AM
> To: foaf-protocols at lists.foaf-project.org
> Subject: Re: [foaf-protocols] WebID - mandated syntax or market solution? was WebID Incubator Charter draft
> On 12/18/2010 06:54 PM, Kingsley Idehen wrote:
>> On 12/17/10 6:53 PM, Jiří Procházka wrote:
>>> Sorry, but this reply makes me feel like I am talking to a wall. This
>>> is nothing new to me; what you say is the basis for Linked Data, which
>>> I have been familiar with in detail for a couple of years, and you
>>> have been infusing most of your emails with it in one form or another.
>>> Let's just agree we know what we are talking about and get to the point:
>> Tell me how anything we do isn't about Linked Data, in reality.
>> There is no WebID without Linked Data. Period!
> Please don't put words in my mouth. I just said I understand Linked Data and how it works. In fact, I believe it is a very good thing and have been encouraging its use for some time.
>>> Do you agree the "WebID" name could be used, besides as the protocol
>>> name, for something like a certificate (think
>>> http://validator.w3.org/docs/help.html#icon) which guarantees to an
>>> end user the software/service is usable in some way? (when it is not
>>> offline of course)
>> WebID is two things in one. An Identifier mechanism and an
>> Authentication protocol. It's a dual acronym.
>>> Do you agree there are higher demands on the reliability of protocols
>>> than on anything else?
>> Maybe, but at this juncture I don't feel your context. Hopefully I
>> will as I read on.
>>> Do you think it would be good if, for example, with the DNS protocol,
>>> each nameserver participating in answering your query could return a
>>> "syntax not understood" error to you as the final answer? Please name
>>> some protocols which do this (syntax conneg) while being defined just
>>> as logic, i.e. the model, as you wish.
>> DNS is not a good example when we talk about a protocol that is based
>> on structured data packets. These data packets carry EAV content plus
>> de-referenceable identifiers in the E & A slots, and optionally in the V slot.
>> This is something different. It's about self-describing data structures
>> and the ability to negotiate representation. The subject realm here is:
>> Distributed Data Objects. That isn't what DNS is about. The URI
>> abstraction leverages DNS for a Name-based Network Expanse with regard
>> to Identifier de-reference.
>>> Do you realise your requirement, "They have to grok the model and
>>> negotiate preferred structured data representations", puts a much
>>> larger strain on potential adopters, over the whole time they attempt
>>> to support WebID as best they can, implementing new syntaxes as those
>>> gain popularity yet still failing to be 100% interoperable, than my
>>> requirement of having to support at least one particular syntax, which
>>> they can implement once and forget about, remaining 100% interoperable
>>> with valid WebIDs?
>> We are talking past each other because we have completely different
>> views of Data Centricity. You believe in Syntax while I believe in
>> Models where representation can be negotiated.
> No, I believe in negotiated representation, but I respect people who want lightweight implementations which support only the bare minimum, and I am willing to support them as well. You, on the other hand, seem to believe solely in negotiated representation, regardless of its disadvantages, which of course exist, as with any purposeful thing in the universe.
>> BTW -- there was a time when we had to write network applications with
>> a thing called XDR; HTTP put that to rest via Content Negotiation and
>> clever abstraction. Unfortunately, HTTP is so "deceptively simple"
>> that folks think it's a simple protocol devoid of sophistication. Quite
>> the contrary, in reality.
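The content negotiation being appealed to can be sketched server-side: given a client's Accept header and the representations a server can offer for one underlying model (a WebID profile, say), pick the one the client prefers. This is a simplified, hypothetical sketch; real q-value and wildcard handling per the HTTP specification is more involved.

```python
from typing import List, Optional

def negotiate(accept_header: str, offered: List[str]) -> Optional[str]:
    """Return the offered media type the client prefers, or None."""
    prefs = []
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        mtype = fields[0].strip()
        q = 1.0  # per HTTP, quality defaults to 1
        for param in fields[1:]:
            key, _, val = param.strip().partition("=")
            if key == "q":
                try:
                    q = float(val)
                except ValueError:
                    pass
        prefs.append((mtype, q))
    # Highest quality first; Python's sort is stable, so ties keep
    # the client's original ordering.
    prefs.sort(key=lambda p: p[1], reverse=True)
    for mtype, q in prefs:
        if q <= 0:
            continue  # q=0 means "not acceptable"
        for offer in offered:
            if mtype in ("*/*", offer):
                return offer
    return None
```

The model stays fixed; only the serialization chosen for the wire varies per client, which is the "clever abstraction" contrasted with XDR's single mandated encoding.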
> I am familiar with XDR and its usage in RPC.
>> WebID is an application of HTTP-based Linked Data.
>> HTTP based Linked Data is a product of Web ubiquity.
>> Web ubiquity is a product of Internet ubiquity.
>> Internet ubiquity is a product of TCP/IP ubiquity.
>> Again, to insinuate that WebID and Linked Data are in any way distinct
>> ultimately illustrates why we don't agree, at this point in time :-)
> Like I said, you are misunderstanding me, thinking I mean things I didn't say.
> I am just willing to support more people with different points of view who generally don't understand our Linked Data / Semantic Web perspective, sacrificing idealistic purity for a practical solution.
> To quote myself, I've summed it up:
>> Well I suppose best would be to make a separate (but linked) spec for
>> a thing called for example "WebID-I" as "WebID - Interoperable"
>> specifying such minimal required syntax to be compliant with it, for
>> the sole purpose of the badges. Hopefully enough people will be
>> reasonable, will see its purpose and its separateness from the core
>> WebID spec, and will save the holy war for some other event.
>> 1. http://en.wikipedia.org/wiki/External_Data_Representation - XDR
> foaf-protocols mailing list
> foaf-protocols at lists.foaf-project.org