Using machine learning to reduce toxicity online
Perspective API by Jigsaw can help mitigate toxicity and ensure healthy dialogue online.
How it works
Toxicity online is an existential problem for platforms and publishers. Online abuse and harassment silence important voices in conversation, forcing already marginalized people offline.
What is Perspective?
Perspective is a free API that uses machine learning to identify "toxic" comments (we define toxicity as a rude, disrespectful, or unreasonable comment that is likely to make someone leave a discussion), making it easier to host better conversations online.
See how it works for yourself:
Type a sentence into the box to see its toxicity score. Perspective returns a percentage that represents the likelihood that someone will perceive the text as toxic. See 'Create Custom Demo' to customize the toxicity thresholds, edit styles, and get simple plug-and-play code.
The demo scores each comment for several attributes:
- toxic
- obscene
- insulting
- threatening
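For developers, the same score shown in the demo is available through a single REST call. Below is a minimal Python sketch against Perspective's documented `comments:analyze` endpoint; the API key value, the sample comments, and the `score_comment` helper name are placeholders for illustration, and error handling is kept to a minimum.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: request a key via the Perspective developer docs
ANALYZE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=" + API_KEY
)

def score_comment(text: str) -> float:
    """Return the likelihood (0.0-1.0) that readers perceive `text` as toxic."""
    payload = {
        "comment": {"text": text},
        # Other attributes (e.g. INSULT, THREAT) can be requested alongside TOXICITY.
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(ANALYZE_URL, json=payload, timeout=10)
    response.raise_for_status()
    data = response.json()
    # summaryScore.value is the probability-like toxicity score for the whole comment.
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(score_comment("You make a fair point, thanks for explaining."))  # expect a low score
    print(score_comment("Shut up, nobody wants you here."))                # expect a higher score
```

What you do with that score is up to you; the 'Create Custom Demo' flow mentioned above generates plug-and-play code with adjustable toxicity thresholds.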
What can developers build with Perspective?
Perspective API helps platforms and publishers of all sizes support healthy, engaging, and productive conversations. Developers integrate and customize Perspective API for many different audiences.
For moderators
Moderators use Perspective to quickly prioritize and review comments that have been reported.
For commenters
Perspective can give feedback to commenters who post toxic comments.
For readers
Developers create tools so readers can control which comments they see, for example hiding comments that may be abusive or toxic.
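To make those three patterns concrete, here is a rough sketch that reuses the hypothetical `score_comment` helper from the earlier example. The threshold values and function names are arbitrary illustrations, not recommendations from Jigsaw; in practice you would also batch or cache scores rather than re-scoring each comment on every call.

```python
from typing import List, Optional

# Assumes the score_comment(text) helper sketched above.

REVIEW_THRESHOLD = 0.7  # illustrative: comments above this rise to the top of the moderation queue
HIDE_THRESHOLD = 0.9    # illustrative: comments above this are hidden from readers by default

def prioritize_for_moderators(reported_comments: List[str]) -> List[str]:
    """Order reported comments so the likeliest-toxic ones are reviewed first."""
    return sorted(reported_comments, key=score_comment, reverse=True)

def feedback_for_commenter(draft: str) -> Optional[str]:
    """Return a gentle nudge if a draft comment is likely to be perceived as toxic."""
    if score_comment(draft) >= REVIEW_THRESHOLD:
        return "Your comment may come across as disrespectful. Consider rephrasing before posting."
    return None

def visible_to_readers(comments: List[str]) -> List[str]:
    """Filter out comments that readers have chosen not to see."""
    return [c for c in comments if score_comment(c) < HIDE_THRESHOLD]
```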
Why use Perspective?
Perspective has been shown to increase engagement by helping platforms and publishers create safe environments for conversation, and by helping individuals make healthier contributions online.
Enables healthy conversations
Reduces toxicity and abusive behavior
Free, self-serve, customizable
Get started today
Developers
Learn how to integrate the API and start building today.
Partners
Read our guide to starting with Perspective.
Researchers
See our academic research, datasets, and open source code.
This model was trained by asking people to rate internet comments on a scale from "Very toxic" to "Very healthy contribution." We define "toxic" as "a rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion."