[foaf-protocols] [foaf-dev] playing with reputations and non secret encryption, suited to the "socialization semantics" of foaf groups in an FOAF+SSL context.

peter williams home_pw at msn.com
Wed Dec 1 20:37:43 CET 2010

I've moved this thread here, where it belongs. It's not on the topic of
FOAF+SSL (a project that I've left to its own devices); but an application
of those FOAF+SSL channels that assumes FOAF+SSL delivers a specifically
*commodity* security service for limiting access to a remote FOAF card.  By
analogy, I want to be the 802.1x committee that took SSL from the https
world, made it connectionless, leveraged new DES modes to build new SSL KDFs
that suit layer 2 security problems...all in an attempt to address a new but
related area (managed switch security, and "trusted ports").

Remembering how in the dotcom-era, venture-backed initiatives went searching
for crypto/math/security methods that perfected various micro-payment
schemes for billions of web-coins (and that many of those crypto-designers
went on to take research grants from government-programs seeking to
repurpose the innovation energy towards an internet-scale key
recovery/escrow infrastructure for billions of keys), I'm sensing an
opportunity to apply the semweb to the problem of massive replication of
semi-trustworthy triple stores. And, it's the semi-trustworthy feature that
draws me to FOAF, since evidently FOAF has design theory focused on serving
and leveraging the properties of specifically semi-trusted channels. It's
more important to be big than entirely trusted; trust can always be
improved later.

It's in the context of leveraging that assumed *space* of trustworthiness
that one learns to distinguish objective trust values from the more
subjective gauges of fidelity, recognizing that the math and methods applied
to trust may well be distinct from those applied to gauging fidelity. This
is the peter hypothesis (an irritant to GCHQ traditionalists probably), that
flies in the face of PKI/KMI and which denies the NSA/GCHQ principle that
ALL trust and fidelity in all secure communications systems must be tightly
bound to key management. Though I believe in that NSA principle that crypto
keying is the only thing worth trusting in secure communications (and said
trust in the key and key management devices engenders the scalable security
of the communications...), 20 years of force-feeding this doctrine into the
IETF and web has not resulted in social adoption. So, let's try the FOAF
doctrine, instead.

As always, when looking for paradigm shifts that are just ahead of the mass
market, search out the sex and hacking sub-cultures - and try to condense
how folks are satisfying their needs to socialize in those aspects of life
by applying new conceptions and the associated re-thinkings of commodity
networking technologies. Then, apply these to the commodity baseline. 

In the hacking area, it does seem that there is generational consolidation
going on - in which the anarchist-agenda techniques developed 10-15 years
ago as a simple irritant are hitting maturity - and their "designers" are
now finding means to apply them to non sex/hacking social problems, now
that time itself has removed the pressure limits imposed by earlier
generations of bandwidth and storage. Folks seem to have hit on adoptable
conceptions of scalable reputation, confidence generation, and trust
management for data sets - conceptions that go beyond theories that invite
folks to send +1 emails to others on the cell, that use uneconomic,
high-assurance design/engineering techniques, or that centralize
point/like-this feedback functions in commerce sites such as eBay/PayPal.
As is typical, there is a fair amount of political thinking about new
desirable social and governance structures built in to the new
conceptions... that may or may not stand the test of time as the enabling
technologies go through the mass-market wringer. Typically, most of it
falls by the wayside once you throw some dollars at it.

The above rationale is what lies behind the 2 technical emails I wrote, as
enclosed below. What FOAF seems to provide as a basis for researching
trust, confidence and fidelity is a core means for describing
semi-trustworthy propositions. When I built on that to add in the proposed
FOAF+SSL channels (which I assume to work, as claimed), it was not hard to
then go find cryptographic means that would enable the public to *probably*
find fidelity in 2 combined sources of group keying material - based on
one's trust in one of the 2 parties, who introduces and endorses the
relatively unknown other party. The resultant fanout of confidence
generation thus felt like it completed the design loop - allowing the
infamous "6 degrees of knowingness" (or was it 7?) to be leveraged to form a
very large diameter in the span of confidence nets... that provides the
basis to gauge fidelity and assign measures of trust.  These are all core
themes of the foaf project, of course.

On 30 November 2010 22:42, Peter Williams <pwilliams at rapattoni.com> wrote:
> I feel a bit better, since the core “turn around” I used is present in 
> the author’s own  http://www.cesg.gov.uk/publications/media/rsa.pdf.
> The method in rsa.pdf might be built upon now to be useful to FOAF 
> (once subverted from its original UK intent of spying on folks, using 
> systemic covert methods).
> Since 2 entities can each now determine that a common authority 
> asserts that each of their introducers are trustworthy and are members 
> of the same foaf:group (denoted by URI), the 2 entities now can engage 
> in rsa.pdf (acting as Alice and Bob, formerly UK split-duty key 
> escrow/generating authorities). That is, they form their own 2-element 
> SUB foaf:group of URI and go make “some key”.
> Having done so, they might engage in foaf “multicast” - acting as a 
> key distribution source for today’s group key. That is, having jointly 
> spent a day calculating an N*N modulus (under the splitting rules) on 
> a commodity 2G PC (a day of a 2G PC is nothing - remembering it took a 
> day to calc 512 bit personal RSA keys on a 1/2G university-grade DEC 
> workstation when I first started), they might 
> now share the “group key” with anyone who can induce the group-owner 
> to confirm that they are a probably-valid member of the group (see 
> earlier msg). By return, validated members are sent each half of the 
> (cleartext) RSA parms that acts as a decrypt-KEK (key encryption 
> key), that can unwrap today’s transport encryption keys (TEKs) for 
> the “group messages”.
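One way to read “each half of the (cleartext) RSA parms” is as an additive split of the private exponent, so that a validated member recombines both halves to unwrap the day’s TEK without either distributor ever holding the whole decrypt-KEK. A minimal sketch under that assumed splitting rule (the email leaves the actual rule to the cited references; the toy primes and values here are illustrative only):

```python
import secrets

# Toy RSA group key -- real use needs large random primes.
P, Q = 10007, 10009
M = P * Q                          # public group modulus
e = 65537
d = pow(e, -1, (P - 1) * (Q - 1))  # the decrypt-KEK

# Hypothetical splitting rule: d = d1 + d2, one half per distributor.
d1 = secrets.randbelow(d)
d2 = d - d1

tek = 42424242 % M                 # today's transport encryption key
wrapped = pow(tek, e, M)           # TEK wrapped under the group key

# c^d1 * c^d2 = c^(d1+d2) = c^d (mod M), so the two halves jointly unwrap.
unwrapped = (pow(wrapped, d1, M) * pow(wrapped, d2, M)) % M
assert unwrapped == tek
```

The additive split means neither half alone reveals d, matching the split-duty flavour of the scheme the email gestures at.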
> This would be useful to parties with triple stores, who want to 
> replicate the model to whomsoever are members of replication-Group G.
> My triples may be important, but no one in their right mind will be 
> trusting me (but they do trust Henry, say, on account of personal 
> knowledge). By peering with Henry, I can combine confidences so as to 
> create a sufficiently trustworthy key applicable to groups of large 
> size – that I then use to replicate/broadcast my XYZ triple store.
> From: foaf-dev-bounces at lists.foaf-project.org
> [mailto:foaf-dev-bounces at lists.foaf-project.org] On Behalf Of Peter 
> Williams
> Sent: Tuesday, November 30, 2010 12:21 AM
> To: foaf-dev at lists.foaf-project.org
> Subject: [foaf-dev] playing with reputations and non secret 
> encryption, suited to the "socialization semantics" of foaf groups in 
> an FOAF+SSL context.
> Assume a foaf-group exists, has name URI, and its webserver offers the 
> group’s foaf card over FOAF+SSL. The foaf card contains statements for 
> the
> integers: a, r, and M. a is a public function of the URI (given 
> modulus M), and r is a (self-signed) signature of a. From the math 
> (see ref below) anyone can test that URI => a => r. M is claimed to be 
> unique to the group, being the product of two large primes (P & Q) 
> chosen at random by the owner of URI.
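A minimal sketch of the group card values, assuming a = H(URI) mod M for a public hash H, and r an RSA-style self-signature under an assumed public exponent e (the email fixes none of these functions; the URI, primes, and exponents here are illustrative only):

```python
import hashlib

# Toy parameters -- real use needs large random primes P, Q.
P, Q = 10007, 10009                 # secret factors held by the group owner
M = P * Q                           # public group modulus, unique to the group
e = 65537                           # assumed public verification exponent
d = pow(e, -1, (P - 1) * (Q - 1))   # owner's private signing exponent

def a_of(uri: str) -> int:
    """Assumed public function of the URI: a = H(URI) mod M."""
    return int.from_bytes(hashlib.sha256(uri.encode()).digest(), "big") % M

uri = "https://example.org/groups/g1#group"   # hypothetical group URI
a = a_of(uri)
r = pow(a, d, M)                    # owner self-signs a

# Anyone can test URI => a => r using only the published card values:
assert pow(r, e, M) == a_of(uri)
```

Only the owner, holding P and Q, can produce r; verification needs nothing secret.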
> Next, anyone with a webid (e.g. w) can claim to be a member of the 
> group by applying the group’s modulus M. To do so, w includes (r, Aw 
> and Rw) in his/her foaf card – where Rw is a signature of r. Following 
> the form of the base case, anyone can test that r => Aw => Rw, and 
> induce URI => Rw.
> Anyone with a webid w2 can claim to be a member of the group AND 
> recognise w as another member of the group. To do so, w2 reads Rw from 
> w’s foaf card using FOAF+SSL, and includes in his/her foaf card (Rw, 
> Aw2, Rw2). This is the inductive hypothesis.
> In essence, we have made a reverse hash chain from M to Rw2, where M 
> is assumed to be a public identity function of a (foaf-group) URI.
> Only the real owner of the URI has the P and Q factors of M, unlike 
> all usurpers of M.
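Treating each link abstractly as a one-way function of the previous tag and the claimant's public value, the reverse chain can be sketched as follows (the actual signing function is left open in the email; SHA-256 stands in for it here, and the byte strings are placeholders):

```python
import hashlib

def link(prev: bytes, public_value: bytes) -> bytes:
    """One chain link: binds a claimant's public value to the previous tag."""
    return hashlib.sha256(prev + public_value).digest()

# M (serialized) anchors the chain; each member extends it in turn.
M_tag = b"group-modulus-M"
r   = link(M_tag, b"a")     # group owner:  M  => a   => r
Rw  = link(r, b"Aw")        # member w:     r  => Aw  => Rw
Rw2 = link(Rw, b"Aw2")      # member w2:    Rw => Aw2 => Rw2

# Anyone holding the published (a, Aw, Aw2) values can replay the chain
# from M and confirm that Rw2 chains back to the group identity.
assert Rw2 == link(link(link(M_tag, b"a"), b"Aw"), b"Aw2")
```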
> The FOAF+SSL design assumption holds, such that only the owner of a 
> webid can control the contents of the referenced FOAF card.
> Anyone may now ask the owner of the group URI (identified by M) to 
> mint a random number t (mod M) and to state then that Rw2 is 
> trustworthy (as of now) for the singular purpose of recognizing Rw as 
> a group member (or that Rw2 is not trusted as a recognizer of Rw). 
> The URI’s owner may state Rw2<Rw> => true/false, by publishing a 
> counter-signature statement s for either true or false, where 
> s = f(t, M, Aw2) = (t + Aw2/t) mod M.
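Reading Aw2/t as multiplication by the modular inverse of t (an assumption; the email does not define the division), the counter-signature can be computed as follows, with illustrative values throughout:

```python
# Toy counter-signature s = f(t, M, Aw2) = (t + Aw2/t) mod M, taking "/"
# as the modular inverse -- an assumed reading of the formula.
M = 10007 * 10009            # group modulus (toy primes)
t = 123457                   # random challenge minted by the group owner
Aw2 = 987654321 % M          # w2's public value (hypothetical)

t_inv = pow(t, -1, M)        # inverse exists whenever gcd(t, M) == 1
s = (t + Aw2 * t_inv) % M

# A verifier holding (t, Aw2, M) recomputes s and compares it with the
# published counter-signature.
assert s == (t + Aw2 * pow(t, -1, M)) % M
```

Note that t must be coprime to M for the inverse to exist, which holds for all but a negligible fraction of random t when P and Q are large.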
> Any party verifies the counter-signed statement using Rw2 (and M). 
> Verification establishes that URI is the true owner of M, is a foaf-group 
> that someone claims to belong to, and URI trusts w2 to state that w1 
> is a group member of said group.
> Based on my rather limited understanding of the one way functions in 
> http://cryptome.org/nsa-nse/nsa-nse-04.pdf,  twisted around somewhat 
> to do signing and hash chaining.
> CAVEAT: I don’t pretend to understand the complexity analysis of the 
> underlying math, or the algebra being leveraged. It is more than 
> possible that in changing the application function, I’ve introduced 
> fundamental flaws due to inappropriate ciphering.

> _______________________________________________
> foaf-dev mailing list
> foaf-dev at lists.foaf-project.org
> http://lists.foaf-project.org/mailman/listinfo/foaf-dev
