Project author: commoncrawl

Project description:
Statistics of Common Crawl monthly archives mined from URL index files
Language: Python
Repository: git://github.com/commoncrawl/cc-crawl-statistics.git
Created: 2016-07-14T08:38:12Z
Project community: https://github.com/commoncrawl/cc-crawl-statistics

License: Apache License 2.0

Basic Statistics of Common Crawl Monthly Archives

Analyze the Common Crawl data to get metrics about the monthly crawl archives:

  • size of the monthly crawls, number of
    • fetched pages
    • unique URLs
    • unique documents (by content digest)
    • number of different hosts, domains, top-level domains
  • distribution of pages/URLs on hosts, domains, top-level domains
  • and …
    • mime types
    • protocols / schemes (http vs. https)
    • content languages (since summer 2018)

This is a description of how to generate the statistics from the Common Crawl URL index files.

The results are presented on https://commoncrawl.github.io/cc-crawl-statistics/.

Step 1: Count Items

The items (URLs, hosts, domains, etc.) are counted using the Common Crawl index files
on AWS S3 s3://commoncrawl/cc-index/collections/*/indexes/cdx-*.gz.

  1. define a pattern of cdx files to process, usually from one monthly crawl (here: CC-MAIN-2016-26)

     • either a smaller set of local files for testing:

           INPUT="test/cdx/cdx-0000[0-3].gz"

     • or one monthly crawl to be accessed via Hadoop on AWS S3:

           INPUT="s3a://commoncrawl/cc-index/collections/CC-MAIN-2016-26/indexes/cdx-*.gz"

  2. run crawlstats.py --job=count to process the cdx files and count the items:

         python3 crawlstats.py --job=count --no-exact-counts \
             --no-output --output-dir .../count/ $INPUT

Help on command-line parameters (including mrjob options) is shown by
python3 crawlstats.py --help.
The option --no-exact-counts is recommended (and is the default) to save storage space and computation time
when counting URLs and content digests.
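
Each line in these cdx files combines a SURT-formatted URL key and a capture timestamp with a JSON record describing one fetched page. As a rough illustration of the items the count job works with, here is a minimal Python sketch; the record fields (url, mime, digest) follow the usual CDX layout, and the host/TLD derivation below is simplified for illustration, not the project's actual counting code:

    import json
    from urllib.parse import urlsplit

    # One (shortened) cdx index line: SURT key, timestamp, JSON record.
    line = ('org,example)/index.html 20160714083812 '
            '{"url": "https://example.org/index.html", "mime": "text/html", '
            '"status": "200", "digest": "SHA1EXAMPLEDIGEST"}')

    key, timestamp, record = line.split(' ', 2)
    capture = json.loads(record)

    parts = urlsplit(capture['url'])
    host = parts.hostname            # example.org
    scheme = parts.scheme            # https (vs. http)
    tld = host.rsplit('.', 1)[-1]    # org (naive split, ignores multi-level suffixes)
    print(scheme, host, tld, capture['mime'], capture['digest'])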

Step 2: Aggregate Counts

Run crawlstats.py --job=stats on the output of step 1:

    python3 crawlstats.py --job=stats --max-top-hosts-domains=500 \
        --no-output --output-dir .../stats/ .../count/

The maximum number of most frequent hosts and domains contained in the output is set by the option
--max-top-hosts-domains=N.

Step 3: Download the Data

In order to prepare the plots, the output of step 2 must be downloaded to local disk.
The simplest way is to fetch the data from the Common Crawl Public Data Set bucket on AWS S3:

    while read crawl; do
        aws s3 cp s3://commoncrawl/crawl-analysis/$crawl/stats/part-00000.gz ./stats/$crawl.gz
    done <<EOF
    CC-MAIN-2008-2009
    ...
    EOF

One aggregated, gzip-compressed statistics file is about 1 MiB in size, so you can simply run
get_stats.sh to download the data files for all released monthly crawls.
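
To get a feel for the downloaded data, the records can be inspected directly. A minimal sketch, assuming the part files use mrjob's tab-separated output of a JSON key and a JSON value per line (the file name is just an example):

    import gzip
    import json

    # Print the first few aggregated records of one downloaded stats file.
    # Assumed line format: JSON key <tab> JSON value (mrjob default output).
    with gzip.open('stats/CC-MAIN-2022-05.gz', 'rt', encoding='utf-8') as f:
        for i, line in enumerate(f):
            if i >= 10:
                break
            key, value = line.rstrip('\n').split('\t', 1)
            print(json.loads(key), '->', json.loads(value))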

The output of step 1 is also provided on s3://commoncrawl/. The counts for every crawl are held
in 10 bzip2-compressed files, together about 1 GiB per crawl on average. To download the counts for one crawl:

  • if you’re on AWS and the AWS CLI is installed and configured:

        CRAWL=CC-MAIN-2022-05
        aws s3 cp --recursive s3://commoncrawl/crawl-analysis/$CRAWL/count stats/count/$CRAWL

  • otherwise:

        CRAWL=CC-MAIN-2022-05
        mkdir -p stats/count/$CRAWL
        for i in $(seq 0 9); do
            curl https://data.commoncrawl.org/crawl-analysis/$CRAWL/count/part-0000$i.bz2 \
                >stats/count/$CRAWL/part-0000$i.bz2
        done

Step 4: Plot the Data

To prepare the plots using the downloaded aggregated data:

    gzip -dc stats/CC-MAIN-*.gz | python3 plot/crawl_size.py

The full list of commands to prepare all plots is found in plot.sh. Don’t forget to install the Python
modules required for plotting.
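
On systems without a gzip command-line tool, the same pipeline can be driven from Python; a small sketch equivalent to the one-liner above:

    import glob
    import gzip
    import shutil
    import subprocess

    # Equivalent of: gzip -dc stats/CC-MAIN-*.gz | python3 plot/crawl_size.py
    proc = subprocess.Popen(['python3', 'plot/crawl_size.py'], stdin=subprocess.PIPE)
    for path in sorted(glob.glob('stats/CC-MAIN-*.gz')):
        with gzip.open(path, 'rb') as f:
            shutil.copyfileobj(f, proc.stdin)
    proc.stdin.close()
    proc.wait()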

Step 5: Local Site Preview

The crawl statistics site is hosted on GitHub Pages. The site is updated as soon as plots or description texts are updated, committed, and pushed to the GitHub repository.

To preview local changes, it’s possible to serve the site locally:

  1. build the Docker image with Ruby, Jekyll and the content to be served:

         docker build -f site.Dockerfile -t cc-crawl-statistics-site:latest .

  2. run a Docker container to serve the site preview:

         docker run --network=host --rm -ti cc-crawl-statistics-site:latest

     The site should be served on localhost, port 4000 (http://127.0.0.1:4000).
     If not, the correct location is shown in the output of the docker run command.

     If running this on a Mac, you may find that the loopback interface (127.0.0.1) within the container is not accessible. In that case, change the CMD line in the Dockerfile to:

         CMD bundle exec jekyll serve --host 0.0.0.0

     … and then the site will be served on http://0.0.0.0:4000 instead. (You will of course need to rebuild the Docker image after updating the Dockerfile.)

The columnar index simplifies counting and analytics a lot: it is easier to maintain, and more transparent,
reproducible and extensible than running two MapReduce jobs; see the list of examples.
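
For illustration, a minimal sketch of such a count against the columnar index, using pyarrow and s3fs rather than a SQL engine; the partition path follows the documented layout under s3://commoncrawl/cc-index/table/cc-main/warc/, and the column name url_host_tld is taken from the index's published schema, but both should be treated as assumptions here:

    import pyarrow.compute as pc
    import pyarrow.dataset as ds
    import s3fs

    # Anonymous access to the public Common Crawl bucket.
    fs = s3fs.S3FileSystem(anon=True)

    # One crawl partition of the columnar URL index (Parquet files).
    dataset = ds.dataset(
        'commoncrawl/cc-index/table/cc-main/warc/crawl=CC-MAIN-2022-05/subset=warc',
        filesystem=fs, format='parquet')

    # A full crawl has hundreds of Parquet files; scan only a couple for a quick test.
    sample = ds.dataset(dataset.files[:2], filesystem=fs, format='parquet')

    # Count captured pages per top-level domain by reading a single column.
    tlds = sample.to_table(columns=['url_host_tld'])['url_host_tld']
    vc = pc.value_counts(tlds)
    top = sorted(zip(vc.field('values').to_pylist(), vc.field('counts').to_pylist()),
                 key=lambda kv: kv[1], reverse=True)
    print(top[:20])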