Project author: polkovnik-z

Project description:
Fully native robots.txt parsing component without any dependencies.
Language: JavaScript
Repository: git://github.com/polkovnik-z/robots-txt-component.git
Created: 2020-11-24T21:51:10Z
Project homepage: https://github.com/polkovnik-z/robots-txt-component

License: MIT License

robots-txt-component

Lightweight robots.txt parsing component for Node.js, without any external dependencies.

Installation

Via NPM

  npm install robots-txt-component --save

Getting started

Before using the parser, you need to initialize it as shown below:

  const RobotsParser = require('robots-txt-component');
  ...
  let robots = new RobotsParser('https://www.example.com', true) // allowOnNeutral = true
  await robots.init() // Will attempt to retrieve the robots.txt file natively and parse it.

Check whether a URL may be crawled, along with other usages:

  let robots = new RobotsParser('https://www.example.com', true) // allowOnNeutral = true
  await robots.init() // Will attempt to retrieve the robots.txt file natively and parse it.

  let url = 'https://www.example.com/some-path' // the URL you want to check
  let userAgent = '*'
  if (robots.canCrawl(url, userAgent)) { // Checks the url against all the rules found in the robots.txt file
    // Your logic
  }

  // Get the crawl delay for a user agent
  let crawlDelay = robots.getCrawlDelay('Bingbot')

  // Get the raw robots.txt content
  let content = robots.getRawContent()

API

Robots

constructor(url, allowOnNeutral = true, rawRobotsTxt = null)

url: the domain whose robots.txt file you want to use.

allowOnNeutral: if the same number of allow and disallow rules match a url, should it be allowed or disallowed?

rawRobotsTxt: if you have already retrieved the raw robots.txt content, provide it here.
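
For example, a minimal sketch of constructing a parser from robots.txt content you have already retrieved. The raw string below is illustrative, and whether init() then skips the network fetch is my assumption, not documented behavior:

  const RobotsParser = require('robots-txt-component');

  // Illustrative robots.txt content; in practice this would come from
  // your own fetch or a cache (e.g. the output of getRawContent()).
  const raw = 'User-agent: *\nDisallow: /admin/\nCrawl-delay: 5';
  let robots = new RobotsParser('https://www.example.com', true, raw);
  await robots.init();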

async init()

Promise&lt;void&gt;

Must be called and awaited before performing any other action.
This method attempts to retrieve the robots.txt file from the provided url.
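
Since init() performs a network request, it may be worth guarding the call. A minimal sketch; the README does not document init()'s failure behavior, so the try/catch below is an assumption:

  const RobotsParser = require('robots-txt-component');

  let robots = new RobotsParser('https://www.example.com', true);
  try {
    await robots.init(); // fetches and parses https://www.example.com/robots.txt
  } catch (err) {
    // Assumption: a failed fetch surfaces here. Decide whether to retry
    // or fall back to your own default crawl policy.
    console.error('Could not load robots.txt:', err.message);
  }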

getRawContent()

string | null

This method returns the raw robots.txt file content.

canCrawl(url, userAgent)

boolean

Checks the rules for the url path and the user agent in the robots.txt file and returns access policy.
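
For instance, a short sketch of filtering a list of candidate URLs before fetching them. Here `robots` is an initialized parser as in Getting started; the URLs and the 'MyBot' user agent are made up for illustration:

  const candidates = [
    'https://www.example.com/',
    'https://www.example.com/admin/settings',
  ];

  // Keep only the URLs that the robots.txt rules allow for this user agent.
  const allowed = candidates.filter((u) => robots.canCrawl(u, 'MyBot'));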

getCrawlDelay(userAgent)

integer

Checks whether a crawl-delay is defined for the user agent and returns it; if none is defined, returns 0.
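
A sketch of honoring the returned delay between requests. `robots` is an initialized parser; pagesToFetch and download() are placeholders for your own crawl list and fetch logic, and 'MyBot' is a made-up user agent:

  const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

  let delay = robots.getCrawlDelay('MyBot') // 0 when no matching Crawl-delay directive exists
  for (const page of pagesToFetch) {
    await download(page)
    await sleep(delay * 1000) // Crawl-delay values are conventionally given in seconds
  }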

LICENSE

  MIT License

  Copyright (c) 2020 Ibragim Abubakarov

  Permission is hereby granted, free of charge, to any person obtaining a copy
  of this software and associated documentation files (the "Software"), to deal
  in the Software without restriction, including without limitation the rights
  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
  copies of the Software, and to permit persons to whom the Software is
  furnished to do so, subject to the following conditions:

  The above copyright notice and this permission notice shall be included in all
  copies or substantial portions of the Software.

  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
  SOFTWARE.