Parsing Common Crawl data with Scrapy.
Parsing huge Web ARChive (WARC) files from the Common Crawl data index to fetch any required domain's data concurrently with Python and Scrapy.
Common Crawl is a 501(c)(3) nonprofit organization that crawls the web and freely provides its archives and datasets to the public. Its web archive consists of petabytes of data collected since 2011, and new crawls are generally completed every month.
Common Crawl currently stores its crawl data in the Web ARChive (WARC) format; earlier crawls were stored in the ARC file format. WARC allows for more efficient storage and processing of Common Crawl's freely available multi-billion-page web archives, which can be hundreds of terabytes in size.
Below are the different types of files available on Common Crawl:
- WARC files, which store the raw crawl data (the archived HTTP requests and responses)
- WAT files, which store computed metadata for the data stored in the WARC files
- WET files, which store the plaintext extracted from the data stored in the WARC files
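As a quick illustration of the WARC format, a segment can be read record by record with the warcio library. This is a minimal sketch, assuming a locally downloaded .warc.gz file; the file name is just a placeholder.

```python
from warcio.archiveiterator import ArchiveIterator

# Iterate over a locally downloaded Common Crawl segment (placeholder name).
with open("CC-MAIN-sample.warc.gz", "rb") as stream:
    for record in ArchiveIterator(stream):
        # 'response' records hold the HTTP responses captured by the crawler.
        if record.rec_type == "response":
            url = record.rec_headers.get_header("WARC-Target-URI")
            payload = record.content_stream().read()  # raw HTML bytes
            print(url, len(payload))
```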
I have parsed the WARC files of the October 2020 crawl to gather the URLs and titles of all pages for a given domain; this can be adapted to other use cases.
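To make the workflow concrete, a spider matching the commands below could look roughly like this. Only the spider name ccrawl and the domain argument are taken from the execution commands; the crawl identifier CC-MAIN-2020-45 (assumed here to correspond to the October 2020 crawl), the data.commoncrawl.org download host, and the parsing logic are assumptions for this sketch, not the author's exact implementation. It requires the warcio package.

```python
import io
import json

import scrapy
from warcio.archiveiterator import ArchiveIterator


class CCrawlSpider(scrapy.Spider):
    name = "ccrawl"
    # CC-MAIN-2020-45 is assumed to be the October 2020 crawl.
    index_url = "https://index.commoncrawl.org/CC-MAIN-2020-45-index?url={domain}/*&output=json"

    def __init__(self, domain="", *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.domain = domain  # supplied on the command line via -a domain=...

    def start_requests(self):
        # Step 1: ask the Common Crawl index which WARC records contain the domain.
        yield scrapy.Request(self.index_url.format(domain=self.domain),
                             callback=self.parse_index)

    def parse_index(self, response):
        # Step 2: the index returns newline-delimited JSON; each line names a WARC
        # file plus the byte offset/length of one capture, so the record can be
        # fetched with an HTTP Range request instead of downloading the whole file.
        for line in response.text.splitlines():
            rec = json.loads(line)
            if rec.get("status") != "200":
                continue
            start = int(rec["offset"])
            end = start + int(rec["length"]) - 1
            yield scrapy.Request(
                "https://data.commoncrawl.org/" + rec["filename"],
                headers={"Range": f"bytes={start}-{end}"},
                callback=self.parse_warc_record,
                cb_kwargs={"page_url": rec["url"]},
            )

    def parse_warc_record(self, response, page_url):
        # Step 3: the ranged response is a single gzipped WARC record; decompress it
        # with warcio and pull the <title> out of the archived HTML.
        for record in ArchiveIterator(io.BytesIO(response.body)):
            if record.rec_type == "response":
                html = record.content_stream().read().decode("utf-8", errors="ignore")
                title = scrapy.Selector(text=html).xpath("//title/text()").get()
                yield {"url": page_url, "title": title}
```

Because every index entry becomes an independent Scrapy request, the WARC records are fetched concurrently, which is what makes this approach practical for large domains.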
The code can be executed in two ways:
scrapy crawl ccrawl -o filename.csv -a domain=your_domain.com
python run_prog.py
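The second command suggests a small programmatic wrapper around the same spider. A minimal sketch of such a run_prog.py, assuming the spider and output file from the first command, might be:

```python
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings


def main():
    settings = get_project_settings()
    # Write the scraped items to CSV, mirroring "-o filename.csv" from the CLI form.
    settings.set("FEEDS", {"filename.csv": {"format": "csv"}})

    process = CrawlerProcess(settings)
    process.crawl("ccrawl", domain="your_domain.com")  # same -a domain= argument
    process.start()  # blocks until the crawl finishes


if __name__ == "__main__":
    main()
```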
Note: