[foaf-dev] [foaf-protocols] FOAF sites offline during cleanup

Kingsley Idehen kidehen at openlinksw.com
Wed Apr 29 23:08:15 CEST 2009


Hugh Glaser wrote:
> Thanks Kingsley,
>
>
> On 29/04/2009 12:20, "Kingsley Idehen" <kidehen at openlinksw.com> wrote:
>
>   
>> Hugh,
>>
>> I absolutely understand your concern.
>>     
> Thanks mate.
>   
>> To cut a long story short, how would you suggest we describe what we
>> have? 
>>     
> I think not as in your earlier post :-) :
> "We are now nearing complete stability re uploads, deletes, and data
> cleansing activity re. the Virtuoso instance hosting the LOD Cloud [1]."
>   
>> What about the following:
>>
>> 1. A collection of most of the data from the LOD-Cloud pictorial
>> 2.  LOD-Cloud sample
>>     
> I would suggest you describe it as a "mirror of some of the LOD data".
> This will get across the idea that it is only partial, and that it is of
> necessity out of synchronisation with the original sources, but is clearly
> more than a sample. (I don't know if it is most - have you done some
> investigations?)
>   
>>> I really don't want to be reviewing/seeing papers in a few months time where
>>> people are presenting analysis they claim to have done of the "LOD cloud" or
>>> similar, and they have based their data gathering on the misconception that
>>> all they have to do is look at your cloud.
>>>  
>>>       
>> Neither do I, but I have expressly called on everyone who has
>> contributed to the LOD-Cloud (warehouse) to verify what's been loaded so
>> far. Sadly, deafening silence until we make any kind of claim.
>>     
> This is the way of the world - you seem to have some expectation that the
> most valuable use of my time is to keep checking to see what you have put in
> your system. As you say, this LOD work is non-trivial, and we all have a
> zillion other things to do.
> However, as you know, when you have asked or made claims I have sometimes
> gone and looked, as I did this time; but it can be pretty time-consuming to
> go through someone else's store, sampling to see how many of the URIs you
> expect to find are in fact missing. And each time you ask it becomes more
> of a chore.
> But I think that actually the onus is on the claimant to do some of their
> own analysis and justification before making the claims.
> For example, you might go through our void:exampleResource entries (or even
> the ones that you already have in your system) to sample how many of them
> you have the RDF for.
>   
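
Hugh's suggested check — sampling void:exampleResource URIs and seeing what fraction are actually present in the target store — could be sketched roughly as follows. The URIs below are hypothetical placeholders; a real check would pull the sample from a dataset's VoiD description and test each URI against the store's SPARQL endpoint.

```python
# Sketch of the coverage check Hugh suggests: sample void:exampleResource
# URIs and measure how many are actually present in the store.
# All URIs here are made-up placeholders, not real LOD resources.

def coverage(example_resources, loaded_uris):
    """Return the fraction of sampled example resources found in the store."""
    if not example_resources:
        return 0.0
    found = sum(1 for uri in example_resources if uri in loaded_uris)
    return found / len(example_resources)

# Hypothetical sample: three void:exampleResource URIs, two of them loaded.
sample = [
    "http://example.org/resource/A",
    "http://example.org/resource/B",
    "http://example.org/resource/C",
]
store = {"http://example.org/resource/A", "http://example.org/resource/B"}

print(coverage(sample, store))  # 2 of 3 sampled URIs present
```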

Hugh,

As per private mail,

We will use the VoiD graphs that accompany RDF dumps as a control
mechanism as part of the post-data-load activities. If the control test
establishes no differences in the data itself, but the VoiD graph we
generate differs from the one shipped with the source, we will use the
source's VoiD graph (which reduces the tedium of VoiD graph generation).
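
A minimal sketch of that control idea, assuming the two VoiD descriptions are reduced to simple per-dataset statistics (the property names and counts below are illustrative, not Virtuoso's actual pipeline):

```python
# Sketch of the VoiD-based control check: compare statistics from the VoiD
# graph shipped with the RDF dump against the one generated after loading.
# If the data load itself verifies but the two disagree, prefer the
# source-supplied VoiD graph. Counts are illustrative placeholders.

def pick_void(source_void, generated_void, data_check_passed):
    """Choose which VoiD statistics to publish after a data load."""
    if data_check_passed and generated_void != source_void:
        # Data verified; trust the source's VoiD graph and skip the
        # tedium of regenerating our own.
        return source_void
    return generated_void

source = {"triples": 1_200_000, "entities": 85_000}
generated = {"triples": 1_199_874, "entities": 85_000}

chosen = pick_void(source, generated, data_check_passed=True)
print(chosen["triples"])  # the source-supplied count wins here
```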

Others: the current count discrepancy is due to our filtering out graphs
with < 10K triples.

I think this matter is closed; we'll qualify our claims where such is
required re. the LOD-Cloud :-)

Kingsley
> Best
> Hugh
>   
>> As you
>> know this work is non-trivial (in all respects).
>>
>> It would be really sad if the easy part of providing dataset
>> verification feedback for our instance becomes the reason for it to
>> stagnate and ultimately wither away (we do have a zillion other things
>> to do with our time, seriously).
>>
>> The goal of what we call the LOD-Cloud instance is to provide the Linked
>> Data Web with a powerful faceted browsing and entity information lookup
>> solution based on Linked Data. To date we haven't even seen DBpedia
>> replicas, let alone what we now have. Both are significant validators of
>> the Linked Data Web in general.
>>
>> I can assure you, I didn't have academic papers in mind when
>> commissioning either of these endeavors.
>>
>> Kingsley
>>
>>     
>
>
>   


-- 


Regards,

Kingsley Idehen	      Weblog: http://www.openlinksw.com/blog/~kidehen
President & CEO 
OpenLink Software     Web: http://www.openlinksw.com
