Extracting Structured Data from the Common Web Crawl
Christian Bizer
Robert Meusel
Anna Primpeli

More and more websites have started to embed structured data describing products, people, organizations, places, and events into their HTML pages using markup standards such as RDFa, Microdata and Microformats.
The Web Data Commons project extracts this data from several billion web pages. The project provides the extracted data for download and publishes statistics about the deployment of the different formats.

Contents

1. About Web Data Commons

More and more websites embed structured data describing, for instance, products, people, organizations, places, events, resumes, and cooking recipes into their HTML pages using markup formats such as RDFa, Microdata and Microformats. The Web Data Commons project extracts all Microformat, Microdata and RDFa data from the Common Crawl web corpus, the largest and most up-to-date web corpus that is currently available to the public, and provides the extracted data for download in the form of RDF quads. In addition, we calculate and publish statistics about the deployment of the different formats as well as the vocabularies that are used together with each format.

So far, we have extracted all RDFa, Microdata and Microformats data from the following releases of the Common Crawl web corpus:

For the future, we plan to rerun our extraction on a regular basis as new Common Crawl corpora become available.

Below, you find information about the extracted data formats and detailed statistics about the extraction results. In addition we have analyzed trends in the deployment of the most widely spread formats as well as in the deployment of selected RDFa and Microdata classes. This analysis can be found here.

2. Extracted Data Formats

The table below provides an overview of the different structured data formats that we extract from the Common Crawl. The table contains references to the specifications of the formats as well as short descriptions of the formats. Web Data Commons packages the extracted data for each format separately for download. The table also defines the format identifiers that are used in the following.

Format | Identifier | Description
RDFa | html-rdfa | RDFa is a specification for attributes to express structured data in any markup language, e.g. HTML. The underlying abstract representation is RDF, which lets publishers build their own vocabulary, extend others, and evolve their vocabulary with maximal interoperability over time.
HTML Microdata | html-microdata | Microdata allows nested groups of name-value pairs to be added to HTML documents, in parallel with the existing content.
hCalendar Microformat | html-mf-hcalendar | hCalendar is a calendaring and events format, using a 1:1 representation of standard iCalendar (RFC 2445) VEVENT properties and values in HTML.
hCard Microformat | html-mf-hcard | hCard is a format for representing people, companies, organizations, and places, using a 1:1 representation of vCard (RFC 2426) properties and values in HTML.
Geo Microformat | html-mf-geo | Geo is a 1:1 representation of the "geo" property from the vCard standard, reusing the geo property and sub-properties as-is from the hCard microformat. It can be used to mark up latitude/longitude coordinates in HTML.
hListing Microformat | html-mf-hlisting | hListing is a proposal for a listings (UK English: small-ads; classifieds) format suitable for embedding in HTML.
hResume Microformat | html-mf-hresume | The hResume format is based on a set of fields common to numerous resumes published today on the web, embedded in HTML.
hReview Microformat | html-mf-hreview | hReview is a format suitable for embedding reviews (of products, services, businesses, events, etc.) in HTML.
hRecipe Microformat | html-mf-recipe | hRecipe is a format suitable for embedding information about cooking recipes in HTML.
Species Microformat | html-mf-species | The Species proposal enables marking up taxonomic names for species in HTML.
XFN Microformat | html-mf-xfn | XFN (XHTML Friends Network) is a simple format to represent human relationships using hyperlinks.
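To make the formats above more concrete, the hypothetical HTML fragment below annotates a product with schema.org terms using the Microdata syntax (format html-microdata). The product name, price and currency are invented for illustration.

```html
<!-- Hypothetical product page fragment annotated with schema.org Microdata -->
<div itemscope itemtype="http://schema.org/Product">
  <span itemprop="name">Example Espresso Machine</span>
  <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
    <span itemprop="price">199.00</span>
    <meta itemprop="priceCurrency" content="EUR" />
  </div>
</div>
```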

3. Extraction Results

3.1. Extraction Results from the December 2014 Common Crawl Corpus

The December 2014 Common Crawl Corpus is available on Amazon S3 in the bucket aws-publicdatasets under the key prefix /common-crawl/crawl-data/CC-MAIN-2014-52/ .
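As a small illustration of accessing the corpus (assuming the AWS SDK for Java v1 is on the classpath; credentials, region setup and pagination beyond the first batch of keys are omitted), the sketch below lists a few object keys under this prefix:

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class ListCrawlFiles {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        // Bucket and key prefix of the December 2014 Common Crawl corpus (see above);
        // only the first batch of keys is printed.
        ObjectListing listing = s3.listObjects("aws-publicdatasets",
                "common-crawl/crawl-data/CC-MAIN-2014-52/");
        for (S3ObjectSummary summary : listing.getObjectSummaries()) {
            System.out.println(summary.getKey() + "  (" + summary.getSize() + " bytes)");
        }
    }
}
```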

Extraction Statistics


Crawl Date: Winter 2014
Total Data: 160 Terabytes (compressed)
Parsed HTML URLs: 2,014,175,679
URLs with Triples: 620,151,400
Domains in Crawl: 15,668,667
Domains with Triples: 2,722,425
Typed Entities: 5,516,068,263
Triples: 20,484,755,485

Format Breakdown


3.2. Extraction Results from the November 2013 Common Crawl Corpus

The November 2013 Common Crawl Corpus is available on Amazon S3 in the bucket aws-publicdatasets under the key prefix /common-crawl/crawl-data/CC-MAIN-2013-48/ .

Extraction Statistics


Crawl Date: Winter 2013
Total Data: 44 Terabytes (compressed)
Parsed HTML URLs: 2,224,829,946
URLs with Triples: 585,792,337
Domains in Crawl: 12,831,509
Domains with Triples: 1,779,935
Typed Entities: 4,264,562,758
Triples: 17,241,313,916

Format Breakdown


3.3. Extraction Results from the August 2012 Common Crawl Corpus

The August 2012 Common Crawl Corpus is available on Amazon S3 in the bucket aws-publicdatasets under the key prefix /common-crawl/parse-output/segment/ .

Extraction Statistics


Crawl Date: January-June 2012
Total Data: 40.1 Terabytes (compressed)
Parsed HTML URLs: 3,005,629,093
URLs with Triples: 369,254,196
Domains in Crawl: 40,600,000
Domains with Triples: 2,286,277
Typed Entities: 1,811,471,956
Triples: 7,350,953,995

Format Breakdown


Extraction Costs

The costs for parsing the 40.1 Terabytes of compressed input data of the August 2012 Common Crawl corpus, extracting the RDF data and storing the extracted data on S3 totaled 398 USD in Amazon EC2 fees. We used 100 spot instances of type c1.xlarge for the extraction which altogether required 5,636 machine hours.

3.3b Extraction Results from the February 2012 Common Crawl Corpus

Common Crawl published a pre-release version of its 2012 corpus in February. The pages contained in the pre-release are a subset of the pages contained in the August 2012 Common Crawl corpus. We also extracted the structured data from this pre-release. The resulting statistics can be found here, but are superseded by the August 2012 statistics.

3.4. Extraction Results from the 2009/2010 Common Crawl Corpus

The 2009/2010 Common Crawl Corpus is available on Amazon S3 in the bucket aws-publicdatasets under the key prefix /common-crawl/crawl-002/ .

Extraction Statistics


Crawl Dates: Sept 2009 (4 TB), Jan 2010 (6.9 TB), Feb 2010 (4.3 TB), Apr 2010 (4.4 TB), Aug 2010 (3.6 TB), Sept 2010 (6 TB)
Total Data: 28.9 Terabytes (compressed)
Total URLs: 2,804,054,789
Parsed HTML URLs: 2,565,741,671
Domains with Triples: 19,113,929
URLs with Triples: 147,871,837
Typed Entities: 1,546,905,880
Triples: 5,193,276,058

Format Breakdown


Extraction Costs

The costs for parsing the 28.9 Terabytes of compressed input data of the 2009/2010 Common Crawl corpus, extracting the RDF data and storing the extracted data on S3 totaled 576 EUR (excluding VAT) in Amazon EC2 fees. We used 100 spot instances of type c1.xlarge for the extraction which altogether required 3,537 machine hours.

3.5. Trends 2012 to 2014

In the following, we analyze trends in the deployment of the most widely spread formats as well as in the deployment of selected RDFa and Microdata classes based on the 2012, 2013 and 2014 data sets.
It is important to mention that the three corresponding Common Crawl web corpora have different sizes (2 billion to 3 billion HTML pages), cover different numbers of websites (12 million to 40 million PLDs, selected by "importance" of the PLD) and also only partly overlap in the HTML pages they cover. The following trends must therefore be interpreted with caution.

Adoption by Format

The diagram below shows the total number of pay-level domains (PLDs) making use of one of the three most widely deployed markup formats (RDFa, Microdata and the Microformat hCard) within the three crawls. Although the total number of domains using the hCard Microformat appears to have decreased, one has to keep in mind that the first crawl contains 50% more HTML pages than the latter two. For Microdata, and especially schema.org, we find an increase in deployment since 2012. The second diagram shows how the number of triples that we extracted from the crawls developed between 2012 and 2014.

Adoption of Selected RDFa Classes

In the following, we report trends in the adoption of selected RDFa classes. The first diagram shows the number of PLDs using each class. The second diagram shows the total number of entities of each class contained in the WDC RDFa data sets. We see that the number of websites deploying the Facebook Open Graph Protocol classes og:article and og:product, as well as foaf:Document, stays approximately constant. The deployment of og:website and gd:breadcrumb is increasing.

Adoption of Selected Schema.org Classes

Below, we analyze the development of the adoption of schema.org classes embedded using the Microdata syntax. The two diagrams show the deployment of these classes by the number of deploying PLDs and by the number of entities extracted from the crawls. We see a continuous increase in the number of PLDs adopting the schema.org classes. This is also reflected in the number of entities within the data sets. The slight decrease for the classes PostalAddress and LocalBusiness might originate from the characteristics of the third crawl, which contains a similar number of pages as the second crawl but covers a larger number of PLDs. This likely resulted in a shallower coverage of the websites that contain schema.org data and thus in a smaller number of extracted entities.

4. Example Data

For each data format, we provide a small subset of the extracted data below for testing purposes. The data is encoded as N-Quads, with the fourth element used to represent the provenance of each triple (the URL of the page the triple was extracted from). Be advised to use a parser that is able to skip invalid lines, since such lines may be present in the data files.
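A minimal sketch of such lenient processing is shown below, assuming a gzipped N-Quads download named wdc-sample.nq.gz (a placeholder name). It applies only a coarse shape check and skips lines a strict parser would reject; a real application would hand the surviving lines to a proper N-Quads parser.

```java
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;

public class QuadReader {
    public static void main(String[] args) throws Exception {
        String file = "wdc-sample.nq.gz";   // placeholder file name
        long kept = 0, skipped = 0;
        try (BufferedReader in = new BufferedReader(new InputStreamReader(
                new GZIPInputStream(new FileInputStream(file)), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                line = line.trim();
                // Coarse sanity check only: a quad has subject, predicate, object and
                // a graph URI (the page it was extracted from) and ends with '.'
                if (line.isEmpty() || !line.endsWith(".") || line.split("\\s+").length < 4) {
                    skipped++;               // skip lines a strict parser would reject
                    continue;
                }
                kept++;                      // hand the line to a real N-Quads parser here
            }
        }
        System.out.println(kept + " quads kept, " + skipped + " lines skipped");
    }
}
```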

5. Extraction Process

Since the Common Crawl data sets are stored in the AWS Simple Storage Service (S3), it made sense to perform the extraction in the Amazon cloud (EC2). The main criterion here is the cost of achieving a given task. Instead of using the ubiquitous Hadoop framework, we found that using the Simple Queue Service (SQS) made our extraction process more efficient. SQS provides a message queue implementation, which we use to coordinate the extraction nodes. The Common Crawl data set is readily partitioned into compressed files of around 100 MB each. We add the identifiers of each of these files as messages to the queue. A number of EC2 nodes monitor this queue and take file identifiers from it. The corresponding file is then downloaded from S3. Using the ARC file parser from the Common Crawl codebase, the file is split into individual web pages. On each page, we run our RDF extractor, which is based on the Anything To Triples (Any23) library. The resulting RDF triples are then written back to S3 together with the extraction statistics, which are collected later. The advantage of this queue is that messages have to be explicitly marked as processed, which we do only after the entire file has been extracted. Should any error occur, the message is requeued after some time and processed again.
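The condensed sketch below shows the shape of such a worker node, assuming the AWS SDK for Java v1; the queue URL, bucket name and the extractAndUpload method are placeholders, not the actual Web Data Commons code. A message names one crawl file, the file is downloaded from S3 and processed, and the message is deleted only after the whole file has been handled, so a failed run makes it visible again after the visibility timeout.

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;

public class ExtractionWorker {
    // Placeholder values; the real queue and buckets are set in ccrdf.properties
    static final String QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/wdc-tasks";
    static final String CRAWL_BUCKET = "aws-publicdatasets";

    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        while (true) {
            for (Message msg : sqs.receiveMessage(QUEUE_URL).getMessages()) {
                String key = msg.getBody();          // identifier of one ~100 MB crawl file
                try (S3Object crawlFile = s3.getObject(CRAWL_BUCKET, key)) {
                    extractAndUpload(crawlFile);      // split into pages, run Any23, write results to S3
                    // Delete the message only after the whole file has been processed;
                    // on failure it becomes visible again and is retried by another node.
                    sqs.deleteMessage(QUEUE_URL, msg.getReceiptHandle());
                } catch (Exception e) {
                    System.err.println("Failed to process " + key + ": " + e.getMessage());
                }
            }
        }
    }

    static void extractAndUpload(S3Object crawlFile) { /* placeholder for the actual extraction */ }
}
```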

Any23 parses web pages for structured data by building a DOM tree and then evaluating XPath expressions to find the structured data. While profiling, we found this tree generation to account for much of the parsing cost, and we have therefore searched for a way to reduce the number of times this tree has to be built. Our solution is to run (Java) regular expressions against each web page prior to extraction, which detect the presence of a format in the HTML, and to run the Any23 extractor only when the regular expressions find potential matches. The formats html-mf-hcard, html-mf-hcalendar, html-mf-hlisting, html-mf-hresume, html-mf-hreview and html-mf-recipe define class names that are unique enough that the presence of the class name in the HTML document is ample indication of the Microformat being present. For the remaining formats, the following table shows the regular expressions used; a sketch of this prefiltering step follows the table.

Format            Regular Expression
html-rdfa         (property|typeof|about|resource)\\s*=
html-microdata    (itemscope|itemprop\\s*=)
html-mf-xfn       <a[^>]*rel\\s*=\\s*(\"|')[^\"']*(contact|acquaintance|friend|met|co-worker|colleague|co-resident|neighbor|child|parent|sibling|spouse|kin|muse|crush|date|sweetheart|me)
html-mf-geo       class\\s*=\\s*(\"|')[^\"']*geo
html-mf-species   class\\s*=\\s*(\"|')[^\"']*species
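A minimal sketch of this prefiltering step, using the patterns from the table above (only a subset is shown; the page content and the surrounding extraction code are placeholders):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class FormatPrefilter {
    // Patterns from the table above; for the remaining Microformats a plain
    // class-name containment check (e.g. "hreview") is sufficient.
    private static final Map<String, Pattern> FILTERS = new LinkedHashMap<>();
    static {
        FILTERS.put("html-rdfa", Pattern.compile("(property|typeof|about|resource)\\s*="));
        FILTERS.put("html-microdata", Pattern.compile("(itemscope|itemprop\\s*=)"));
        FILTERS.put("html-mf-geo", Pattern.compile("class\\s*=\\s*(\"|')[^\"']*geo"));
        FILTERS.put("html-mf-species", Pattern.compile("class\\s*=\\s*(\"|')[^\"']*species"));
    }

    /** Returns true if the page possibly contains one of the formats and
     *  should therefore be passed to the (expensive) Any23 extraction. */
    static boolean mightContainStructuredData(String html) {
        for (Pattern p : FILTERS.values()) {
            if (p.matcher(html).find()) {
                return true;
            }
        }
        return false;
    }
}
```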

6. Source Code

The source code can be checked out from our Subversion repository. Afterwards, create your own configuration by copying src/main/resources/ccrdf.properties.dist to src/main/resources/ccrdf.properties, then fill in your AWS authentication information and bucket names. Compilation is performed using Maven, so changing into the source root directory and running mvn install should be sufficient to build the project. In order to run the extractor on more than 10 EC2 nodes, you will have to request an EC2 instance limit increase for your AWS account. More information about running the extractor is provided in the file readme.txt .

7. License

The extracted data is provided according to the same terms of use, disclaimer of warranties and limitation of liabilities that apply to the Common Crawl corpus.

The Web Data Commons extraction framework can be used under the terms of the Apache Software License.

8. Feedback

Please send questions and feedback to the Web Data Commons mailing list or post them in our Web Data Commons Google Group.

More information about Web Data Commons is found here.

9. Credits

Web Data Commons started as a joint effort of Freie Universität Berlin and the Institute AIFB at the Karlsruhe Institute of Technology in early 2012. It is now mainly maintained by the Research Group Data and Web Science at the University of Mannheim.

We thank our former contributors for their help and support:

Also lots of thanks to

Web Data Commons is supported by the PlanetData and LOD2 research projects.


10. References