This file contains the detailed extraction report for the 2009/2010 corpus of the Web Data Commons project.
All of our data, as well as the code used, is available for download.
The extracted structured data is provided for download in the N-Quads RDF encoding, divided according to the format the data was encoded in. Files are compressed using GZIP and split after reaching a size of 100 MB. Overall, 410 files with a total size of 40 GB were produced.
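The GZIP-compressed N-Quads files can be inspected without fully decompressing them first. The following sketch reads such a file line by line and splits each quad into its four components; the regular expression is a deliberately minimal assumption (a production setup should use a proper RDF library such as rdflib), and it skips malformed lines rather than failing on them.

```python
import gzip
import re

# Rough pattern for one N-Quads statement: subject, predicate,
# object (IRI, literal, or blank node), graph, terminated by " .".
# This is a simplification, not a full N-Quads grammar.
QUAD = re.compile(r'^(\S+)\s+(\S+)\s+(.+?)\s+(\S+)\s*\.\s*$')

def parse_quad(line):
    """Split one N-Quads line into (subject, predicate, object, graph).

    Returns None for lines that do not match, so blank, comment,
    or invalid lines can simply be skipped.
    """
    m = QUAD.match(line.strip())
    return m.groups() if m else None

def read_nquads_gz(path):
    """Yield parsed quads from a GZIP-compressed N-Quads file."""
    with gzip.open(path, mode="rt", encoding="utf-8", errors="replace") as f:
        for line in f:
            quad = parse_quad(line)
            if quad:  # skip anything the simple pattern cannot handle
                yield quad
```

Because the dump is split into many ~100 MB parts, a typical workflow iterates `read_nquads_gz` over every downloaded file in turn.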
List of download URLs for RDF from the 2009/2010 corpus (Example Content)
The extracted RDF data can be downloaded using wget with the following command:
wget -i http://webdatacommons.org/downloads/2010-09/nquads/files.list
The extracted microformat data are also available for download as CSV tables. The SPARQL queries used for generating the CSV tables are available as well.
CSV Table | SPARQL Query
To obtain a general overview of the URLs using structured data, and to link back to the Common Crawl .arc files, the detailed extraction statistics can be used. The extraction statistics record the amount of structured data found for each URL in the crawl data. Be advised to use a parser that is able to skip invalid lines, since such lines may be present in the tab-separated files. The table contains the following columns (not necessarily in this order):
uri - The URL of the crawled page
hostIp - The IP address of the computer the page was crawled from
mimeType - The MIME type of the page as communicated by the web server
timestamp - Time and date when the page was crawled, as a UTC UNIX timestamp
recordLength - Size of the HTML content in bytes
arcFileName - Name of the Common Crawl archive file containing the page
arcFilePos - Byte offset of the page inside the archive file
detectedMimeType - MIME type as detected by the extractor
html-* - Number of triples found on the page, one column per extractor identifier
totalTriples - Total number of triples found on the HTML page
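A parser for the tab-separated statistics files can be sketched as follows. It assumes the first line is a header row naming the columns above (adjust if your copy has no header), and it skips rows whose field count does not match, since invalid lines may be present. The `html-rdfa` identifier used in the filter is an assumed example of the `html-*` column form.

```python
import csv

def read_extraction_stats(path):
    """Yield one dict per valid row of a tab-separated statistics file.

    Column order is taken from the file's header row; malformed rows
    (wrong number of fields) are skipped instead of raising an error.
    """
    with open(path, encoding="utf-8", errors="replace", newline="") as f:
        reader = csv.reader(f, delimiter="\t")
        header = next(reader)
        for row in reader:
            if len(row) != len(header):
                continue  # skip invalid lines
            yield dict(zip(header, row))

def pages_with_rdfa(path, min_triples=1):
    """Example filter: pages where the (assumed) 'html-rdfa' extractor
    found triples, yielding what is needed to locate the page again
    inside the Common Crawl .arc files."""
    for row in read_extraction_stats(path):
        try:
            if int(row.get("html-rdfa", "0")) >= min_triples:
                yield row["uri"], row["arcFileName"], int(row["arcFilePos"])
        except (ValueError, KeyError):
            continue  # skip rows with missing or non-numeric fields
```

The yielded `arcFileName` and `arcFilePos` pair identifies the archive file and the byte offset at which the original page record starts, which is the intended way to link a statistics row back to the crawl data.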
Sample Extraction Statistic File (CSV)
Extraction Statistic File (CSV)