Project author: R-aryan

Project description:
Identify and classify toxic online comments, based on a Kaggle dataset

Primary language: Python
Project URL: git://github.com/R-aryan/Jigsaw-Toxic-Comment-Classification.git
Created: 2021-07-04T19:18:28Z
Project community: https://github.com/R-aryan/Jigsaw-Toxic-Comment-Classification

License: MIT License



Jigsaw Toxic Comment Classification

Identify and classify toxic online comments

  • End-to-end NLP multi-label classification problem
  • The Kaggle dataset can be found here

Steps to run the project are described here

Dataset Description

We are provided with a large number of Wikipedia comments which have been labeled by human raters for toxic behavior. The types of toxicity are:

  • toxic
  • severe_toxic
  • obscene
  • threat
  • insult
  • identity_hate

The goal is to create a model that predicts the probability of each type of toxicity for each comment.
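The repository has its own training pipeline (see the steps linked above). As a rough illustration of what such a multi-label setup looks like, the sketch below trains a simple TF-IDF plus one-vs-rest logistic regression baseline; the file name (`train.csv`), column names, and hyperparameters are assumptions based on the public Kaggle dataset, not taken from this repository.

```python
# Minimal multi-label baseline sketch (not the repository's own pipeline).
# Assumes the Kaggle train.csv with a comment_text column and the six label columns.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

train = pd.read_csv("train.csv")  # path to the Kaggle training file (assumption)
X_text, y = train["comment_text"], train[LABELS].values

X_train, X_val, y_train, y_val = train_test_split(
    X_text, y, test_size=0.1, random_state=42
)

# TF-IDF features + one binary logistic-regression classifier per label.
vectorizer = TfidfVectorizer(max_features=50_000, sublinear_tf=True)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))

clf.fit(vectorizer.fit_transform(X_train), y_train)

# predict_proba returns one probability per label for each comment.
probs = clf.predict_proba(vectorizer.transform(X_val))
print(dict(zip(LABELS, probs[0].round(3))))
```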

Following are screenshots of a sample request and the corresponding response; a rough client-side sketch is given after the list below.

  • Request sample
    (screenshot: sample request)

  • Response sample
    (screenshot: sample response)
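The screenshots themselves are images in the repository. As an illustration only, the sketch below shows how a client might send such a request, assuming the prediction service is running locally and exposes a JSON endpoint; the host, port, endpoint path, and payload keys are assumptions, not taken from the repository.

```python
# Illustrative client call; endpoint path, port, and payload keys are assumptions.
import requests

payload = {"sentence": "You are a wonderful person, thank you for the help!"}
response = requests.post("http://localhost:8080/predict", json=payload, timeout=10)

# Expected response: a probability for each of the six toxicity labels.
print(response.json())
```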