More and more websites have started to embed structured data describing products, people, organizations, places, and events into their HTML pages, using markup standards such as RDFa, Microdata and Microformats.
The Web Data Commons project extracts this data from several billion web pages. The project provides the extracted data for download and publishes statistics about the deployment of the different formats.
- 2013-11-12: Web Data Commons releases large Hyperlink Graph covering 3.5 billion web pages and 128 billion hyperlinks between these pages.
- 2013-09-02: Paper on Web Data Commons accepted at ISWC 2013 Conference in Sydney: Deployment of RDFa, Microdata, and Microformats on the Web – A Quantitative Analysis.
- 2013-07-12: New analysis available about the types of products that are offered by e-shops using Microdata markup.
- 2013-07-05: Yahoo! Research releases Glimmer search engine which enables you to search Web Data Commons data. Details.
- 2012-12-10: Extraction results from the August 2012 Common Crawl corpus available for download.
- 2012-06-29: We have created a new analysis on vocabulary usage in our Microdata and RDFa dataset.
- 2012-06-20: Presentation on Web Data Commons and our Extraction Process at the AWS Summit 2012 Berlin - Slides
- 2012-04-16: Paper on Web Data Commons presented at the LDOW 2012 Workshop (References)
- 2012-03-22: Extraction results from the February 2012 Common Crawl corpus available for download
- 2012-03-13: Extraction results from the 2009/2010 Common Crawl corpus available for download
1. About Web Data Commons
More and more websites embed structured data describing, for instance, products, people, organizations, places, events, resumes, and cooking recipes into their HTML pages using markup formats such as RDFa, Microdata and Microformats. The Web Data Commons project extracts all Microformat, Microdata and RDFa data from the Common Crawl web corpus, the largest and most up-to-date web corpus that is currently available to the public, and provides the extracted data for download in the form of RDF quads and also in the form of CSV tables for common entity types (e.g. product, organization, location, ...). In addition, we calculate and publish statistics about the deployment of the different formats as well as the vocabularies that are used together with each format.
We have extracted all RDFa, Microdata and Microformats data from the August 2012 and the 2009/2010 Common Crawl corpora. Web pages are included in the Common Crawl corpora based on their PageRank score, which makes the crawls snapshots of the currently popular part of the Web. For the future, we plan to rerun our extraction on a regular basis as new Common Crawl corpora become available.
2. Extracted Data Formats
The table below provides an overview of the different structured data formats that we extract from the Common Crawl. The table contains references to the specifications of the formats as well as short descriptions of the formats. Web Data Commons packages the extracted data for each format separately for download. The table also defines the format identifiers that are used in the following.
|RDFa||RDFa is a specification for attributes to express structured data in any markup language, e.g. HTML. The underlying abstract representation is RDF, which lets publishers build their own vocabulary, extend others, and evolve their vocabulary with maximal interoperability over time.|
|HTML Microdata||Microdata allows nested groups of name-value pairs to be added to HTML documents, in parallel with the existing content.|
|hCalendar Microformat||hCalendar is a calendaring and events format, using a 1:1 representation of standard iCalendar (RFC2445) VEVENT properties and values in HTML.|
|hCard Microformat||hCard is a format for representing people, companies, organizations, and places, using a 1:1 representation of vCard (RFC2426) properties and values in HTML.|
|Geo Microformat||Geo is a 1:1 representation of the "geo" property from the vCard standard, reusing the geo property and sub-properties as-is from the hCard microformat. It can be used to mark up latitude/longitude coordinates in HTML.|
|hListing Microformat||hListing is a proposal for a listings (UK English: small-ads; classifieds) format suitable for embedding in HTML.|
|hResume Microformat||The hResume format is based on a set of fields common to numerous resumes published today on the web embedded in HTML.|
|hReview Microformat||hReview is a format suitable for embedding reviews (of products, services, businesses, events, etc.) in HTML.|
|hRecipe Microformat||hRecipe is a format suitable for embedding information about recipes for cooking in HTML.|
|Species Microformat||The Species proposal enables marking up taxonomic names for species in HTML.|
|XFN Microformat||XFN (XHTML Friends Network) is a simple format to represent human relationships using hyperlinks.|
3. Extraction Results
3.1. Extraction Results from the August 2012 Common Crawl Corpus
The August 2012 Common Crawl Corpus is available on Amazon S3 in the bucket aws-publicdatasets under the key prefix
|Crawl Date||January-June 2012|
|Total Data||40.1 Terabytes (compressed)|
|Parsed HTML URLs||3,005,629,093|
|URLs with Triples||369,254,196|
|Domains in Crawl||40,600,000|
|Domains with Triples||2,286,277|
- Detailed Statistics for the August 2012 corpus (HTML)
- Additional Statistics and Analysis for the August 2012 corpus (HTML)
- Download Instructions to access the Data
The costs for parsing the 40.1 Terabytes of compressed input data of the August 2012 Common Crawl corpus, extracting the RDF data, and storing the extracted data on S3 totaled 398 USD in Amazon EC2 fees. We used 100 spot instances of type c1.xlarge for the extraction, which altogether required 5,636 machine hours.
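As a rough plausibility check (derived only from the totals stated above, not from additional sources), these figures imply an average price of about seven US cents per machine hour:

```python
# Back-of-the-envelope check of the extraction cost figures above.
total_cost_usd = 398   # total Amazon EC2 fees for the August 2012 run (USD)
machine_hours = 5636   # total machine hours across 100 c1.xlarge spot instances

cost_per_hour = total_cost_usd / machine_hours
print(f"~{cost_per_hour:.3f} USD per machine hour")  # prints "~0.071 USD per machine hour"
```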
3.2. Extraction Results from the February 2012 Common Crawl Corpus
Common Crawl published a pre-release version of its 2012 corpus in February. The pages contained in the pre-release are a subset of the pages contained in the August 2012 Common Crawl corpus. We also extracted the structured data from this pre-release; the resulting statistics can be found here, but are superseded by the August 2012 statistics.
3.3. Extraction Results from the 2009/2010 Common Crawl Corpus
The 2009/2010 Common Crawl Corpus is available on Amazon S3 in the bucket aws-publicdatasets under the key prefix
|Crawl Dates||Sept 2009 (4 TB), Jan 2010 (6.9 TB), Feb 2010 (4.3 TB), Apr 2010 (4.4 TB), Aug 2010 (3.6 TB), Sept 2010 (6 TB)|
|Total Data||28.9 Terabytes (compressed)|
|Parsed HTML URLs||2,565,741,671|
|Domains with Triples||19,113,929|
|URLs with Triples||147,871,837|
The costs for parsing the 28.9 Terabytes of compressed input data of the 2009/2010 Common Crawl corpus, extracting the RDF data, and storing the extracted data on S3 totaled 576 EUR (excluding VAT) in Amazon EC2 fees. We used 100 spot instances of type c1.xlarge for the extraction, which altogether required 3,537 machine hours.
4. Example Data
For each data format, we provide a small subset of the extracted data below for testing purposes. The data is encoded as N-Quads, with the fourth element used to represent the provenance of each triple (the URL of the page the triple was extracted from). Be advised to use a parser that is able to skip invalid lines, since such lines may be present in the data files.
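A minimal sketch of such a lenient reader, assuming a deliberately simplified N-Quads pattern (it will not handle every literal form; a full parser should follow the N-Quads grammar):

```python
import re

# Very simplified N-Quads line pattern: subject and predicate are IRIs
# (or a blank node for the subject), the object may be an IRI, blank
# node, or literal, and the fourth element (the graph) is the URL of
# the page the triple was extracted from.
NQUAD = re.compile(
    r'^(<[^>]+>|_:\S+)\s+'   # subject: IRI or blank node
    r'(<[^>]+>)\s+'          # predicate: IRI
    r'(.+?)\s+'              # object: IRI, blank node, or literal
    r'(<[^>]+>)\s*\.\s*$'    # graph: page URL (provenance)
)

def read_quads(lines):
    """Yield (subject, predicate, object, graph) tuples, skipping invalid lines."""
    for line in lines:
        m = NQUAD.match(line)
        if m:
            yield m.groups()
        # lines that do not match are silently skipped, as recommended above

example = [
    '<http://example.com/#p> <http://xmlns.com/foaf/0.1/name> "Alice" <http://example.com/page.html> .',
    'this line is malformed and will be skipped',
]
for quad in read_quads(example):
    print(quad)
```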
5. Extraction Process
Since the Common Crawl data sets are stored in the Amazon Simple Storage Service (S3), it made sense to perform the extraction in the Amazon cloud (EC2). The main criterion here is the cost of achieving a given task. Instead of using the ubiquitous Hadoop framework, we found that using the Simple Queue Service (SQS) for our extraction process increased efficiency. SQS provides a message queue implementation, which we use to coordinate the extraction nodes. The Common Crawl data set is readily partitioned into compressed files of around 100 MB each. We add the identifier of each of these files as a message to the queue. A number of EC2 nodes monitor this queue and take file identifiers from it. The corresponding file is then downloaded from S3. Using the ARC file parser from the Common Crawl codebase, the file is split into individual web pages. On each page, we run our RDF extractor, which is based on the Anything To Triples (Any23) library. The resulting RDF triples are then written back to S3 together with the extraction statistics, which are later collected. The advantage of this queue is that messages have to be explicitly marked as processed, which we do only after the entire file has been extracted. Should any error occur, the message is requeued after some time and processed again.
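The coordination pattern can be sketched with a local stand-in for SQS (a hypothetical illustration using Python's standard library, not the project's actual Java implementation; the file names and failure are simulated):

```python
import queue
import threading

# Local stand-in for Amazon SQS, illustrating the coordination pattern:
# file identifiers are queued, workers take one each, and a message is
# "deleted" only after the whole file has been processed; on failure it
# is put back on the queue and retried.
work = queue.Queue()
results = []
failed_once = set()

def extract(file_key):
    # Placeholder for the real work: download the ~100 MB chunk from S3,
    # split it into pages with the ARC parser, run the Any23-based
    # extractor, and write the triples back to S3.
    if file_key == "chunk-002" and file_key not in failed_once:
        failed_once.add(file_key)
        raise RuntimeError("simulated transient failure")
    results.append(file_key)

def worker():
    while True:
        try:
            key = work.get(timeout=0.5)
        except queue.Empty:
            return  # queue drained, worker exits
        try:
            extract(key)          # message is removed only if this succeeds
        except RuntimeError:
            work.put(key)         # on failure the message is requeued, as with SQS
        finally:
            work.task_done()

for key in ["chunk-001", "chunk-002", "chunk-003"]:
    work.put(key)

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # prints "['chunk-001', 'chunk-002', 'chunk-003']"
```

All three chunks end up processed, with chunk-002 succeeding on its second attempt; this retry-on-requeue behavior is what makes the queue-based design robust against node failures.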
Any23 parses web pages for structured data by building a DOM tree and then evaluating XPath expressions to find structured data. While profiling, we found this tree generation to account for much of the parsing cost, and we therefore searched for a way to reduce the number of times this tree is built. Our solution is to run (Java) regular expressions against each web page prior to extraction, which detect the presence of a Microformat in an HTML page, and to run the Any23 extractor only when the regular expressions find potential matches. The formats html-mf-hcard, html-mf-hcalendar, html-mf-hlisting, html-mf-hresume, html-mf-hreview and html-mf-recipe define sufficiently unique class names, so that the presence of such a class name in an HTML document is ample indication that the Microformat is present. For the remaining formats, the following table shows the regular expressions used.
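To illustrate the idea, here is a hypothetical pre-filter in Python (the project's actual patterns are Java regular expressions and may differ); the class names used are the standard root class names of the respective Microformats:

```python
import re

# Cheap regular expressions decide whether the expensive DOM-based
# extractor needs to run at all. The keys are the format identifiers
# used on this page; the patterns look for the Microformat root class.
PREFILTERS = {
    "html-mf-hcard":     re.compile(r'class="[^"]*vcard'),    # hCard root class
    "html-mf-hcalendar": re.compile(r'class="[^"]*vevent'),   # hCalendar root class
    "html-mf-hreview":   re.compile(r'class="[^"]*hreview'),  # hReview root class
}

def formats_possibly_present(html):
    """Return the formats whose pre-filter matches; only their extractors run."""
    return [fmt for fmt, rx in PREFILTERS.items() if rx.search(html)]

page = '<div class="vcard"><span class="fn">Alice</span></div>'
print(formats_possibly_present(page))  # prints "['html-mf-hcard']"
```

A false positive here only costs one unnecessary Any23 run, while a match miss would lose data, so the patterns should err on the side of matching too often.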
6. Source Code
The source code can be checked out from our Subversion repository. Afterwards, create your own configuration by copying src/main/resources/ccrdf.properties, and fill in your AWS authentication information and bucket names. Compilation is performed using Maven; changing into the source root directory and typing mvn install should be sufficient to create a build. In order to run the extractor on more than 10 EC2 nodes, you will have to request an EC2 instance limit increase for your AWS account. More information about running the extractor is provided in the file
The Web Data Commons extraction framework can be used under the terms of the Apache Software License.
Web Data Commons is a joint effort of the Research Group Data and Web Science at the University of Mannheim (Christian Bizer, Robert Meusel, Michael Schuhmacher, Johanna Völker, Kai Eckert) and the Institute AIFB at the Karlsruhe Institute of Technology (Andreas Harth, Steffen Stadtmüller). The initial version of the extraction code was written by Hannes Mühleisen, now working at CWI in Amsterdam.
Lots of thanks to
- the Common Crawl project for providing their great web crawl and thus enabling Web Data Commons.
- the Any23 project for providing their great library of structured data parsers.
- Christian Bizer, Kai Eckert, Robert Meusel, Hannes Mühleisen, Michael Schuhmacher, and Johanna Völker: Deployment of RDFa, Microdata, and Microformats on the Web - A Quantitative Analysis. Accepted paper at the ISWC In-Use Track 2013 in Sydney, Australia.
- Hannes Mühleisen, Christian Bizer: Web Data Commons - Extracting Structured Data from Two Large Web Corpora. In Proceedings of the WWW2012 Workshop on Linked Data on the Web (LDOW2012).
- Peter Mika, Tim Potter: Metadata Statistics for a Large Web Corpus. In Proceedings of the WWW2012 Workshop on Linked Data on the Web (LDOW2012).
- Peter Mika: Microformats and RDFa deployment across the Web. Blog Post.
- Class Statistics from the Sindice data search engine.