# Toxicity-Classification

Build models to identify toxic statements while reducing bias in classification.

Dataset obtained from the Kaggle competition: https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/overview

Language Used: Python 3

The project consists of a Jupyter Notebook that implements three models for the toxicity classification problem:

  1. Logistic Regression
  2. Random Forests
  3. Gradient Boosted Machines

All three models achieve an accuracy of approximately 92%.
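
For reference, below is a minimal sketch of how the three classifiers could be trained on TF-IDF features with scikit-learn. This is an illustrative assumption, not the notebook's exact code: the column names `comment_text` and `target` follow the competition's `train.csv`, and the 0.5 threshold used to binarize the toxicity score is assumed.

```python
# Illustrative sketch (assumed, not the project's exact notebook code).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load the competition data (column names per the Kaggle train.csv).
df = pd.read_csv("train.csv")
X_text = df["comment_text"]
y = (df["target"] >= 0.5).astype(int)  # assumed threshold: scores >= 0.5 are "toxic"

# Turn comments into sparse TF-IDF features.
vectorizer = TfidfVectorizer(max_features=50_000, stop_words="english")
X = vectorizer.fit_transform(X_text)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# The three model families mentioned in the README.
models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(n_estimators=100, n_jobs=-1),
    "Gradient Boosted Machine": GradientBoostingClassifier(),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    preds = model.predict(X_test)
    print(f"{name}: accuracy = {accuracy_score(y_test, preds):.3f}")
```

Note that the dataset is heavily imbalanced toward non-toxic comments, so plain accuracy can look high on its own; metrics such as ROC AUC (which the competition uses in a bias-weighted form) give a fuller picture.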