The detector provides a detailed analysis of the text's toxicity, indicating the presence and levels of general toxicity, obscene language, insults, threats, and identity hate. It returns a list of dictionaries, each containing the following information:
score: The score of the text on a scale of 0 to 100, with higher scores indicating a higher level of toxicity.
type: The type of toxicity detected, such as "obscene", "insult", "threat", or "identity_hate".
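To make the return shape concrete, here is a short, self-contained sketch that filters such a result list by score. The result values and the `THRESHOLD` constant are illustrative assumptions, not actual detector output.

```python
# Illustrative only: the list below mimics the documented shape
# (a list of dicts with "score" on a 0-100 scale and "type");
# the values and the threshold are assumptions, not real output.
results = [
    {"score": 92, "type": "insult"},
    {"score": 75, "type": "obscene"},
    {"score": 12, "type": "threat"},
]

THRESHOLD = 50  # flag any toxicity type scoring above this value

for result in results:
    if result["score"] > THRESHOLD:
        print(f"Flagged: {result['type']} (score={result['score']})")

# Prints:
# Flagged: insult (score=92)
# Flagged: obscene (score=75)
```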