Using Machine Learning to Reduce Toxicity Online
Perspective uses machine learning models to identify abusive comments. The models score a phrase based on the perceived impact the text may have in a conversation. Developers and publishers can use this score to give feedback to commenters, help moderators more easily review comments, or help readers filter out “toxic” language. We define toxicity as a rude, disrespectful, or unreasonable comment that is likely to make someone leave a discussion.
Perspective models provide scores for several different attributes. In addition to the flagship Toxicity attribute, here are some of the other attributes Perspective can provide scores for:
- Severe Toxicity
- Identity Attack
- Sexually Explicit
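As a sketch of how these scores are requested in practice, the snippet below builds a request body for the API's `comments:analyze` method and pulls the summary score out of a response. The endpoint and field names follow the publicly documented v1alpha1 API; the API key placeholder and the sample response values are illustrative, not real output.

```python
import json
import urllib.request

# Placeholder only; a real key is issued through the Google Cloud console.
API_KEY = "YOUR_API_KEY"
ANALYZE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key="
    + API_KEY
)

def build_analyze_request(text, attributes=("TOXICITY",), language="en"):
    """Build the JSON body for a comments:analyze call."""
    return {
        "comment": {"text": text},
        "languages": [language],
        # Each requested attribute maps to an (optionally empty) config object.
        "requestedAttributes": {attr: {} for attr in attributes},
    }

def summary_score(response, attribute="TOXICITY"):
    """Extract the 0-1 summary score for one attribute from a response."""
    return response["attributeScores"][attribute]["summaryScore"]["value"]

# Illustrative response shape; the value here is invented for the example.
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
}
print(summary_score(sample_response))  # 0.92
```

A live call would POST `json.dumps(build_analyze_request(...))` to `ANALYZE_URL` with a `Content-Type: application/json` header; the score is a probability-style value between 0 and 1, which moderation tools typically compare against their own threshold.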
To learn more about our ongoing research and experimental models, visit our Developers site.
Perspective API is free and available to use in English, Spanish, French, German, Portuguese, Italian, and Russian. The team is constantly developing models to support new languages.
To learn more about international publishers and platforms using Perspective API, check out our case studies. To learn more about languages in development, visit the Developers site.
How to customize Perspective to your needs
Looking to learn more? Visit our Developers site for more technical information.