AI that manages rough conversations on social media


Bullying, threats and other breakdowns in web conversations are a persistent problem on social media. Some people seem to shed their real-world personality and adopt an abusive persona when they enter web conversations, often with strangers. Human moderators of websites are almost always at least one step behind in detecting and managing abuse, by which time the damage is already done. A team from Cornell University and Wikimedia has developed a response to this deficit, presented in the paper "Conversations Gone Awry: Detecting Early Signs of Conversational Failure". It is a machine-learning system that detects patterns in web conversations in real time to predict when a discussion is about to derail into abuse. From the very start of an exchange, the system can produce early warnings of anti-social behaviour, with a focus on trolling, hate speech, harassment and personal attacks. The AI, in other words, analyses signs of misbehaviour, not facts. Humans are still better at this task, succeeding about 72 percent of the time compared with 61.6 percent for the AI. But while an algorithm can monitor every conversation on a website, human capacity is far more limited. Source: MIT Technology Review and Arxiv.org
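As a toy illustration of the general idea, and not the Cornell/Wikimedia model itself, an early-warning system can score the opening comments of a conversation on simple linguistic cues. The cue lists, the function name `derail_risk` and the scoring rule below are all invented for demonstration; the actual paper uses far richer features, such as politeness strategies and rhetorical prompts.

```python
# Toy sketch of cue-based early-warning scoring for conversations.
# NOT the model from "Conversations Gone Awry"; the lexicons and the
# scoring rule are hypothetical, chosen only to illustrate the concept.

# Hypothetical cue lexicons: politeness markers tend to accompany civil
# exchanges, while hostile words often precede personal attacks.
POLITE_CUES = {"please", "thanks", "thank", "perhaps", "maybe", "could"}
HOSTILE_CUES = {"idiot", "stupid", "shut", "liar", "nonsense"}

def derail_risk(opening_comments):
    """Return a crude risk score in [0, 1] from a conversation's first comments.

    Counts hostile vs. polite cue words; more hostile cues push the score
    toward 1 (likely to derail), more polite cues toward 0.
    """
    polite = hostile = 0
    for comment in opening_comments:
        for word in comment.lower().split():
            token = word.strip(".,!?")
            if token in POLITE_CUES:
                polite += 1
            elif token in HOSTILE_CUES:
                hostile += 1
    total = polite + hostile
    if total == 0:
        return 0.5  # no cues observed: maximally uncertain
    return hostile / total

civil = ["Could you perhaps clarify the source?", "Thanks, that helps."]
heated = ["You are a liar and an idiot.", "Shut up, this is nonsense."]
print(derail_risk(civil))   # low risk
print(derail_risk(heated))  # high risk
```

The point of the sketch is only that such a score can be computed before a conversation turns abusive, which is what lets a system like the one in the paper warn moderators while there is still time to intervene.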
