Project author: codefornola

Project description:
A project to scrape the assessor's website and make the data accessible for advanced queries
Language: Python
Clone URL: git://github.com/codefornola/assessor-scraper.git
Created: 2017-07-14T00:15:52Z
Project community: https://github.com/codefornola/assessor-scraper

License: MIT License



assessor-scraper

The goal of this project is to transform the data from the Orleans Parish
Assessor’s Office website into formats that
are better suited for data analysis.

development environment setup

prerequisites

You must have Python 3 installed. You can download it from
https://www.python.org/downloads/.

first, set up a python virtual environment

  1. python3 -m venv .venv
  2. . .venv/bin/activate

install the dependencies with pip

  1. pip install -r requirements.txt

Getting started

Set up the database

By default, the scraper is set up to load data into a PostgreSQL database.
Docs on setting up and making changes to the database are here.
You can quickly get the database running locally using Docker:

  1. docker-compose up -d db
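
If you want to sanity-check that the local database is up before scraping, a
quick connection test with psycopg2 is one option. This is only a sketch: the
host, database name, and credentials below are assumptions, so match them to
whatever your docker-compose.yml and scraper/settings.py actually define.

    # Minimal Postgres connectivity check (connection details are assumed).
    import psycopg2

    conn = psycopg2.connect(
        host="localhost",
        port=5432,
        dbname="assessor",    # assumed database name
        user="postgres",      # assumed user
        password="postgres",  # assumed password
    )
    with conn.cursor() as cur:
        cur.execute("SELECT version();")
        print(cur.fetchone()[0])
    conn.close()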

If you want to explore how to extract data using scrapy, use the scrapy shell
to work with the response interactively.

For example,

  1. scrapy shell http://qpublic9.qpublic.net/la_orleans_display.php?KEY=1500-SUGARBOWLDR
  2. owner = response.xpath('//td[@class="owner_value"]/text()').get()
  3. total_value = response.xpath('//td[@class="tax_value"]/text()')[3].get().strip()
  4. next_page = response.xpath('//td[@class="header_link"]/a/@href').get()
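
The same extraction can also be sketched as a small standalone script, assuming
the page is still reachable at that URL; the XPath expressions simply mirror
the shell session above.

    # Standalone sketch of the shell session above, using requests plus Scrapy's Selector.
    import requests
    from scrapy.selector import Selector

    url = "http://qpublic9.qpublic.net/la_orleans_display.php?KEY=1500-SUGARBOWLDR"
    response = Selector(text=requests.get(url).text)

    owner = response.xpath('//td[@class="owner_value"]/text()').get()
    total_value = response.xpath('//td[@class="tax_value"]/text()')[3].get().strip()
    next_page = response.xpath('//td[@class="header_link"]/a/@href').get()
    print(owner, total_value, next_page)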

Get all the parcel ids

Getting a list of parcel ids allows us to build urls for every property
so we can scrape the data for that parcel. These parcel ids are used
in the url like http://qpublic9.qpublic.net/la_orleans_display.php?KEY=701-POYDRASST,
where 701-POYDRASST is the parcel id.

Running the parcel_id_extractor.py script will cleverly use the owner search to
extract all available parcel ids, then save them in a file parcel_ids.txt.

The file is checked into the repo, but if you want to run it yourself
to update it with the latest parcel ids, run

  1. python parcel_id_extractor.py
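
Once parcel_ids.txt exists, turning each id into a crawlable URL is
straightforward; the sketch below just combines the file with the URL pattern
shown above.

    # Build display-page URLs from the ids in parcel_ids.txt.
    BASE_URL = "http://qpublic9.qpublic.net/la_orleans_display.php?KEY="

    with open("parcel_ids.txt") as f:
        parcel_ids = [line.strip() for line in f if line.strip()]

    urls = [BASE_URL + parcel_id for parcel_id in parcel_ids]
    print(f"{len(urls)} urls, e.g. {urls[0]}")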

Running the spider

Running the spider from the command line will crawl the assessor's website and
output the data to a destination of your choice.

By default, the spider will output data to a Postgres database, which is configured
in scraper/settings.py. You can use a hosted Postgres instance or run one locally using
Docker, as described above.
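
For orientation, a Scrapy project's Postgres output is normally wired up through
the ITEM_PIPELINES setting. The excerpt below is hypothetical: the actual
pipeline class and setting names live in scraper/settings.py and may differ.

    # Hypothetical excerpt of scraper/settings.py -- check the real file for
    # the actual pipeline class and setting names.
    import os

    BOT_NAME = "scraper"

    ITEM_PIPELINES = {
        "scraper.pipelines.PostgresPipeline": 300,  # assumed pipeline class
    }

    # Connection string for a hosted instance or the local Docker container.
    DATABASE_URL = os.environ.get(
        "DATABASE_URL", "postgres://postgres:postgres@localhost:5432/assessor"
    )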

Important Note: Scraping should always be done responsibly, so check the robots.txt file to make sure the site does not explicitly disallow crawling. Also, when running the scraper, be careful not to cause unexpected load on the assessor's website: consider running during off-peak hours and watching request latency to make sure you aren't overwhelming the servers.
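
Checking robots.txt can be scripted with the standard library, as sketched
below; throttling the crawl itself is handled in Scrapy through settings such
as DOWNLOAD_DELAY.

    # Ask robots.txt whether a sample parcel page may be fetched.
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("http://qpublic9.qpublic.net/robots.txt")
    rp.read()
    print(rp.can_fetch("*", "http://qpublic9.qpublic.net/la_orleans_display.php?KEY=701-POYDRASST"))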

To run the spider,

  1. scrapy runspider scraper/spiders/assessment_spider.py

Warning: this will take a long time to run. You can kill the process with ctrl+c.
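
If you prefer to launch the crawl from Python rather than the scrapy CLI,
Scrapy's CrawlerProcess can do it. The spider class name below is an
assumption; check scraper/spiders/assessment_spider.py for the real one.

    # Run the spider programmatically (spider class name is assumed).
    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings

    from scraper.spiders.assessment_spider import AssessmentSpider  # assumed name

    process = CrawlerProcess(get_project_settings())
    process.crawl(AssessmentSpider)
    process.start()  # blocks until the crawl finishes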

To run the spider and output to a csv

  1. scrapy runspider scraper/spiders/assessment_spider.py -o output.csv
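
The CSV output can then be pulled into pandas for analysis. The column names
depend on the spider's item fields, so none are assumed here.

    # Load the scraped CSV for exploration.
    import pandas as pd

    df = pd.read_csv("output.csv")
    print(df.shape)
    print(df.columns.tolist())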

Running on Heroku

Set required environment variables:

  1. heroku config:set DATABASE_URL=postgres://user:pass@host:5432/assessordb
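
On Heroku the connection details arrive as a single DATABASE_URL, which can be
split into individual parameters if the scraper needs them; this is just a
sketch using the standard library.

    # Split a Heroku-style DATABASE_URL into separate connection parameters.
    import os
    from urllib.parse import urlparse

    url = urlparse(os.environ["DATABASE_URL"])
    conn_kwargs = {
        "host": url.hostname,
        "port": url.port or 5432,
        "dbname": url.path.lstrip("/"),
        "user": url.username,
        "password": url.password,
    }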

You can run the scraper on Heroku by scaling up the worker dyno:

  1. heroku ps:scale worker=1

See the Heroku docs for more info on how to deploy Python code.

Running in AWS with Terraform

  1. Install terraform
  2. cd terraform
  3. terraform init
  4. terraform plan
  5. terraform apply
  6. ssh ubuntu@{public_dns}