Assesses content for harmful or toxic language, ensuring the text does not contain offensive, abusive, or harmful language directed at individuals or groups.
actual_output: The text content to be checked for harmful or inappropriate content.
Result: Value in the continuous range [0, 1].
Reasoning: Explanation of the toxicity assessment.
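The fields above can be sketched as a small function. This is a toy illustration only, not the real metric: the function name `assess_toxicity` and the keyword-blocklist scoring are hypothetical stand-ins for an actual classifier, but the interface mirrors the documented fields (`actual_output` in, `Result` in [0, 1] and `Reasoning` out).

```python
def assess_toxicity(actual_output: str) -> dict:
    """Toy sketch: return a toxicity Result in [0, 1] with a Reasoning string.

    A real metric would use a trained classifier; this blocklist is a
    hypothetical placeholder to show the input/output shape only.
    """
    toxic_terms = {"idiot", "stupid", "hate"}
    words = actual_output.lower().split()
    hits = [w for w in words if w.strip(".,!?") in toxic_terms]
    # Scale the hit ratio into the continuous range [0, 1].
    score = min(1.0, len(hits) / max(len(words), 1) * 5)
    reasoning = f"Flagged terms: {hits}" if hits else "No harmful language detected."
    return {"Result": round(score, 2), "Reasoning": reasoning}

print(assess_toxicity("Have a great day!"))
# → {'Result': 0.0, 'Reasoning': 'No harmful language detected.'}
```

A score of 0 means no harmful language was detected; values closer to 1 indicate increasingly toxic content, with the Reasoning field explaining which parts of the text drove the score.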