Project author: greird

Project description:
🦎 A Python 2 Twitter crawler using the Twitter search API.

Language: Python
Repository: git://github.com/greird/tweet-crawler.git
Created: 2018-04-24T18:26:57Z
Project community: https://github.com/greird/tweet-crawler

License: MIT License

Tweet Crawler 🦎

This is a simple Twitter crawler that fetches every tweet from the past week on a given topic (or more, depending on your level of access to the Twitter API).

The id of the last retrieved tweet is stored in the config file, so if you launch the same query again, the crawler resumes from that tweet and appends the new ones to the same CSV file.

Tweets are stored in a CSV file.
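
For reference, here is a minimal sketch of how that resume logic could look, assuming the last id is kept in a [crawler] section of config.cfg under a LAST_TWEET_ID key (the section and key names are illustrative, not necessarily the project's actual ones):

  import configparser

  CONFIG_FILE = "config.cfg"  # the same file that holds the API keys

  def load_last_id(config):
      # Return the id stored by a previous run, or None on the first run.
      if config.has_option("crawler", "LAST_TWEET_ID"):
          return config.get("crawler", "LAST_TWEET_ID")
      return None

  def save_last_id(config, tweet_id):
      # Persist the newest id so the next run can pass it as since_id
      # to the search API and only fetch tweets posted after it.
      if not config.has_section("crawler"):
          config.add_section("crawler")
      config.set("crawler", "LAST_TWEET_ID", str(tweet_id))
      with open(CONFIG_FILE, "w") as f:
          config.write(f)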

Configuration

The crawler depends on the twitter package:

  pip install twitter

Create a config.cfg file next to tweet_crawler.py and fill it with the following code.

  [twitter-api]
  CONSUMER_KEY=
  CONSUMER_SECRET=
  ACCESS_TOKEN_KEY=
  ACCESS_TOKEN_SECRET=

Complete the file with your own Twitter API keys and tokens.
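
The snippet below is a rough sketch of how the crawler might read those keys and authenticate, assuming the twitter package installed above is Python Twitter Tools (whose OAuth helper takes the access token pair before the consumer pair); the query string is just an example:

  import configparser
  from twitter import Twitter, OAuth

  config = configparser.ConfigParser()
  config.read("config.cfg")

  # Authenticate against the REST API with the four credentials from config.cfg.
  api = Twitter(auth=OAuth(
      config.get("twitter-api", "ACCESS_TOKEN_KEY"),
      config.get("twitter-api", "ACCESS_TOKEN_SECRET"),
      config.get("twitter-api", "CONSUMER_KEY"),
      config.get("twitter-api", "CONSUMER_SECRET"),
  ))

  # The standard search endpoint only covers roughly the past week.
  results = api.search.tweets(q="#python", count=100)
  for tweet in results["statuses"]:
      print(tweet["id"], tweet["text"])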

Usage

Run python tweet_crawler.py in your terminal to launch the crawler, then follow the instructions.

If manually interrupted, the crawler will save the tweets collected so far to the CSV file before shutting down.
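
A rough sketch of that shutdown behaviour, using a plain KeyboardInterrupt handler around the fetch loop (the output file name and CSV columns here are assumptions for illustration, not the project's exact ones):

  import csv
  import time

  def crawl(api, query, csv_path="tweets.csv"):
      collected = []
      try:
          while True:
              # One search request per pass; the real crawler also tracks
              # since_id/max_id to page through newer and older tweets.
              batch = api.search.tweets(q=query, count=100)
              collected.extend(batch["statuses"])
              time.sleep(5)  # stay well under the search rate limit
      except KeyboardInterrupt:
          print("Interrupted, saving %d tweets before exit..." % len(collected))
      finally:
          # Append whatever was fetched, even after Ctrl+C.
          with open(csv_path, "a", newline="", encoding="utf-8") as f:
              writer = csv.writer(f)
              for t in collected:
                  writer.writerow([t["id"], t["created_at"],
                                   t["user"]["screen_name"], t["text"]])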