Using machine learning to reduce toxicity online
Perspective API by Jigsaw can help mitigate toxicity and ensure healthy dialogue online.
Toxicity online poses a serious challenge for platforms and publishers. Online abuse and harassment silence important voices in conversation, forcing already marginalized people offline.
What is Perspective?
Perspective is a free API that uses machine learning to identify "toxic" comments, making it easier to host better conversations online. We define toxicity as a rude, disrespectful, or unreasonable comment that is likely to make someone leave a discussion.
See how it works for yourself:
Type a sentence into the box to see its toxicity score. Perspective returns a percentage that represents the likelihood that someone will perceive the text as toxic.
Is this comment toxic?
The demo rates each comment on several attributes:
- toxic
- obscene
- insulting
- threatening
* Please use this demo only if you are older than 13. Please do not enter personally identifiable information.
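For developers who want to reproduce what the demo does, here is a minimal sketch that sends a comment to Perspective's commentanalyzer REST endpoint and prints the TOXICITY score as a percentage. The endpoint path, request body, and response fields follow Jigsaw's public documentation, but verify them against the current API reference before relying on them; API_KEY is a placeholder for your own key.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: request a key via the Perspective API docs
URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_score(text: str) -> float:
    """Return the probability (0.0-1.0) that readers would perceive `text` as toxic."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, params={"key": API_KEY}, json=body, timeout=10)
    response.raise_for_status()
    # Response shape assumed from the public documentation.
    data = response.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    score = toxicity_score("You are such an idiot.")
    print(f"Likelihood of being perceived as toxic: {score:.0%}")
```

The score is a probability, not a verdict: a value of 0.92 means the model estimates most readers would find the comment toxic, and each integration decides what to do with that estimate.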
What can you build with Perspective?
Publishers, platforms, and individuals can use Perspective to power a variety of use cases in comment sections, forums, or any text-based conversation. Developers integrate and customize Perspective for many different audiences.
For moderators
Moderators use Perspective to quickly prioritize and review comments that have been reported.
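One way to build such a triage queue is to rank reported comments by their TOXICITY score so the most likely violations surface first. The sketch below assumes the hypothetical toxicity_score helper from the earlier example; it is an illustration, not an official moderation integration.

```python
# Triage sketch: assumes a toxicity_score(text) helper like the one above,
# returning a probability between 0.0 and 1.0 for each comment.
reported_comments = [
    "This article is garbage and so are you.",
    "I respectfully disagree with the author's methodology.",
    "Get lost, nobody wants you here.",
]

# Score every reported comment, then review the highest-scoring ones first.
queue = sorted(
    ((toxicity_score(text), text) for text in reported_comments),
    reverse=True,
)

for score, text in queue:
    print(f"{score:.0%}  {text}")
```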
For commenters
Perspective can give feedback to commenters who post toxic comments.
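A simple form of that feedback is a pre-post check: score the draft comment and, above some threshold, ask the author to reconsider. The sketch below again assumes the toxicity_score helper defined earlier, and the 0.8 threshold is purely illustrative.

```python
# Feedback sketch, assuming the toxicity_score helper from the first example.
# The threshold is an illustrative choice, not a recommended value.
FEEDBACK_THRESHOLD = 0.8

def pre_post_check(draft: str) -> str:
    score = toxicity_score(draft)
    if score >= FEEDBACK_THRESHOLD:
        return (
            f"Your comment is {score:.0%} likely to be seen as toxic. "
            "Consider rewording it before posting."
        )
    return "Comment posted."

print(pre_post_check("You clearly have no idea what you're talking about."))
```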
For readers
Developers create tools so readers can control which comments they see, like hiding comments.
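For example, a reader-side filter can hide comments whose score exceeds a tolerance the reader chooses. The sketch below is one possible shape for such a tool, again assuming the toxicity_score helper from the first example.

```python
# Reader-side filtering sketch, assuming the toxicity_score helper defined earlier.
# Each reader picks their own tolerance; comments scoring above it are hidden.
def visible_comments(comments: list[str], tolerance: float) -> list[str]:
    """Return only the comments whose toxicity score is at or below `tolerance`."""
    return [text for text in comments if toxicity_score(text) <= tolerance]

thread = [
    "Great write-up, thanks for sharing the data.",
    "Only a moron would believe this.",
]
for text in visible_comments(thread, tolerance=0.5):
    print(text)
```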
Why should you use Perspective?
Perspective has been shown to increase engagement by helping platforms and publishers create safe environments for conversation, and by helping individuals make healthier contributions online.
Enables healthy conversations
Reduces toxicity and abusive behavior
Free. Self-Serve. Customizable.
Get started today
Developers
Learn how to integrate the API and start building today.
Get started
Partners
Read our guide to starting with Perspective.
Get started
Researchers
See our academic research, datasets, and open source code.
Get started
This model was trained by asking people to rate internet comments on a scale from "Very toxic" to "Very healthy contribution". We define "toxic" as "a rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion".