
Comment Nudging

Comment nudging fosters a better, healthier online debate: with this feature, participants get live feedback on the quality of their response.

How does it work?


Using artificial intelligence, we detect potentially offensive or abusive language and encourage participants to sufficiently substantiate their response. Participants receive this feedback while they are still writing.

Do we block comments?


No, we do not block comments. Any comment can be submitted: after all, artificial intelligence can make mistakes. Comments that are potentially toxic do receive a ‘to moderate’ label. As a manager or moderator, you are then informed of this via an e-mail notification, so you can quickly adjust or intervene if necessary!

How do I manage comment nudging?


You cannot manage the artificial intelligence itself. However, you can choose to activate or deactivate comment nudging for your project. Go to your project settings >> ideation >> expert settings and check or uncheck ‘comment nudging’.

Manage ‘comment nudging’ in the ideation expert settings

How exactly does AI moderation technology work?



AI technology helps to recognise harmful or hateful content automatically. To do this, we combine smart software (artificial intelligence) with input from humans.

We work with Textgain, which develops its own technology and AI model. The technology only has access to the content of the processed messages, and not to further user data. Textgain is a Belgian company and a European reference in the field of online hate speech.

People from different backgrounds help in its development, by making lists of words and expressions often used in negative contexts. These words are given a score and a label to indicate how harmful they are (toxicity).
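To make the idea concrete, the scoring described above can be sketched as a simple lexicon lookup. This is a minimal, hypothetical illustration of word-list-based toxicity scoring; the words, scores, labels, and threshold below are invented for the example and do not reflect Textgain's actual model.

```python
# Hypothetical sketch of word-list toxicity scoring (not Textgain's model).
# Each lexicon entry maps a word to a (toxicity score 0..1, label) pair.
LEXICON = {
    "idiot": (0.8, "insult"),
    "stupid": (0.6, "insult"),
}

THRESHOLD = 0.5  # assumed cutoff: at or above this, flag 'to moderate'

def score_comment(text: str) -> float:
    """Return the highest toxicity score among lexicon matches."""
    words = text.lower().split()
    return max((LEXICON[w][0] for w in words if w in LEXICON), default=0.0)

def needs_moderation(text: str) -> bool:
    """True if the comment should get a 'to moderate' label."""
    return score_comment(text) >= THRESHOLD

print(needs_moderation("You are an idiot"))           # True
print(needs_moderation("I disagree with this plan"))  # False
```

A real system also weighs context and combines the lexicon with a trained model, which is why human review of flagged comments remains essential.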

Textgain's developers regularly meet with a diverse group of people to discuss new trends, words, and contexts. Through this collaboration, we ensure that the technology remains fair and that AI bias in the algorithms is minimised. This way, people remain in control.

Want to know more? Read the background article here.

Updated on: 10/02/2025
