- one website exists for the sole purpose of monitoring endpoint reliability, an obvious concern of those who build services that rely on Linked Data sources. Recently, the LII decided to run its own mirror of the DrugBank triplestore to eliminate problems with uptime and to guarantee low latency; performance and accessibility had become major concerns. For consumers, due diligence is important.

For us, there is a distinctly different feel to the examples that Dodds, Flemming, and others have used to illustrate their criteria; they seem to be looking at a set of phenomena that has substantial overlap with ours, but is not quite the same. Part of it is simply the fact, mentioned earlier, that data publishers in distinct domains have distinct biases. For example, those who can’t fully believe in objectivity are forced to put greater emphasis on provenance, while those who are not publishing descriptive data that relies on human judgment feel they can rely on more “objective” assessment methods. But the biggest difference in the “new quality” is that it puts a great deal of emphasis on technical quality in the construction of the data model, and much less on how well the data that populates the model describes real things in the real world.

There are three reasons for that. The first has to do with the nature of the discussion itself: all quality discussions, simply as discussions, tend to neglect factual accuracy, because factual accuracy seems self-evidently a Good Thing; there’s not much to talk about. Second, the people discussing quality in the LOD world are modelers first, and so quality is seen as adhering primarily to the model itself. Finally, the world of the Semantic Web rests on the assumption that “anyone can say anything about anything”. For some, the egalitarian interpretation of that statement reaches the level of religion, making it very difficult to measure quality by judging whether something is factual or not; from a purist’s perspective, it’s opinions all the way down. There is, then, a tendency to rely on formalisms and modeling technique to hold back the tide.

In 2004, we suggested a set of metadata-quality indicators suitable for managers to use in assessing projects and datasets. An updated version of that table would look like this:

| Quality Measure | Quality Criteria |
| --- | --- |
| Completeness | Does the element set completely describe the objects? Are all relevant elements used for each object? Does the data contain everything you expect? Does the data contain *only* what you expect? |
| Provenance | Who is responsible for creating, extracting, or transforming the metadata? How was the metadata created or extracted? What transformations have been done on the data since its creation? Has a dedicated provenance vocabulary been used? Are there authenticity measures (e.g., digital signatures) in place? |
| Accuracy | Have accepted methods been used for creation or extraction? What has been done to ensure valid values and structure? Are default values appropriate, and have they been appropriately used? Are all properties and values valid/defined? |
| Conformance to expectations | Does the metadata describe what it claims to? Does the data model describe what it claims to? Are controlled vocabularies aligned with audience characteristics and understanding of the objects? Are compromises documented and in line with community expectations? |
| Logical consistency and coherence | Is data in elements consistent throughout? How does it compare with other data within the community? Is the data model technically correct and well structured? Is the data model aligned with other models in the same domain? Is the model consistent in the direction of relations? |
| Timeliness | Is metadata regularly updated as the resources change? Are controlled vocabularies updated when relevant? |
| Accessibility | Is the element set in use appropriate to the audience and community? Are the data and its access methods well documented, with exemplary queries and URIs? Do things have human-readable labels? Is it affordable to use and maintain? Does it permit further value-adds? Does it permit republication? Is attribution required if the data is redistributed? Are human- and machine-readable licenses available? |
| Accessibility — technical | Are reliable, performant endpoints available? Will the provider guarantee service (e.g., via a service level agreement)? Is the data available in bulk? Are URIs stable? |
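Several of these criteria, particularly the accessibility ones, lend themselves to automated spot-checks. The Python sketch below is illustrative rather than part of the original checklist; it uses the SPARQLWrapper library to probe an endpoint for two of the questions above: whether resources carry human-readable labels, and how quickly the endpoint answers. The endpoint URL and sample size are placeholders, and a real assessment would sample latency repeatedly over days rather than once.

```python
import time
from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder endpoint URL -- substitute the dataset under assessment.
ENDPOINT = "http://example.org/sparql"

# Sample typed subjects that lack rdfs:label: a quick probe of the
# "Do things have human-readable labels?" criterion.
QUERY = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT DISTINCT ?s WHERE {
  ?s a ?type .
  FILTER NOT EXISTS { ?s rdfs:label ?label }
}
LIMIT 100
"""

def probe(endpoint: str) -> None:
    sparql = SPARQLWrapper(endpoint)
    sparql.setQuery(QUERY)
    sparql.setReturnFormat(JSON)

    # Wall-clock time for a single query: a crude stand-in for the
    # "reliable, performant endpoints" question.
    start = time.monotonic()
    results = sparql.query().convert()
    elapsed = time.monotonic() - start

    unlabeled = [b["s"]["value"] for b in results["results"]["bindings"]]
    print(f"Query answered in {elapsed:.2f}s")
    print(f"{len(unlabeled)} typed resources in the sample lack rdfs:label")
    for uri in unlabeled[:10]:
        print(" ", uri)

if __name__ == "__main__":
    probe(ENDPOINT)
```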
The differences in the example questions reflect the differences of approach that we discussed earlier. Also, the new approach separates criteria related to