Comment and Ideation Nudging

Comment and ideation nudging makes for a better and healthier online debate: with this feature, participants get live feedback on the quality of their responses.


How does it work?


Using artificial intelligence, we detect potentially offensive or abusive language and encourage participants to substantiate their responses sufficiently. Participants receive this feedback while they are still writing their response.
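
To make the idea concrete, here is a minimal, purely illustrative sketch in Python of what such live feedback could look like. The word list, the 20-character threshold and the function name `nudge_for` are invented for this example; the real feature uses an AI model, not a fixed word list.

```python
# A purely illustrative sketch of live feedback on a draft response.
# The word list and the 20-character threshold are invented for this
# example; the real feature uses an AI model, not a fixed list.
from typing import Optional

OFFENSIVE_WORDS = {"idiot", "stupid"}  # hypothetical placeholder list


def nudge_for(draft: str) -> Optional[str]:
    """Return a nudge for the current draft, or None if no nudge applies."""
    words = {w.strip(".,!?").lower() for w in draft.split()}
    if words & OFFENSIVE_WORDS:
        return "Your response may come across as offensive. Consider rephrasing."
    if len(draft.strip()) < 20:
        return "Could you substantiate your response a bit more?"
    return None  # the draft looks fine, so no nudge is shown


print(nudge_for("That is a stupid idea"))  # offensive-language nudge
print(nudge_for("No."))                    # substantiation nudge
```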


Comment Nudging

Where and when does nudging show?


Nudging shows on open text fields: reactions (comments) and long text fields (within an ideation or topic form). Nudges do not show on short text fields (for example, "title of your idea") or closed fields (dropdowns, selects ...).


Do we stop reactions or entries?


No, we do not stop reactions or entries. Any input can be submitted: after all, artificial intelligence can make mistakes. Comments that are potentially toxic do get a ‘to moderate’ label, and as a manager or moderator you are informed of this via an e-mail notification, so you can quickly adjust or intervene if necessary!
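
This "flag, don't block" flow can be sketched as follows. The names `Comment`, `submit` and `notify_moderators`, and the 0.8 threshold, are hypothetical and not the platform's actual API.

```python
# Hypothetical sketch of the "flag, don't block" flow described above;
# Comment, submit and notify_moderators are not the platform's actual API.
from dataclasses import dataclass, field


@dataclass
class Comment:
    body: str
    labels: list = field(default_factory=list)


def notify_moderators(comment: Comment) -> None:
    # Stand-in for the e-mail notification to managers and moderators.
    print(f"[e-mail] Comment flagged for moderation: {comment.body!r}")


def submit(comment: Comment, toxicity_score: float, threshold: float = 0.8) -> Comment:
    """Accept every comment; label potentially toxic ones for review."""
    if toxicity_score >= threshold:
        comment.labels.append("to moderate")
        notify_moderators(comment)
    return comment  # the comment is published either way


c = submit(Comment("You have no idea what you are talking about."), toxicity_score=0.9)
print(c.labels)  # ['to moderate'] -- flagged, but still submitted
```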


How do I manage comment and ideation nudging?


You cannot manage the artificial intelligence itself, but you can activate or deactivate AI nudging for your project. Go to your project settings >> ideation >> expert settings and check or uncheck ‘comment and ideation nudging’.


Manage ‘comment nudging’ in the ideation expert settings

How exactly does AI moderation technology work?


AI technology helps to recognise harmful or hateful content automatically. To do this, we combine smart software (artificial intelligence) with input from humans.


We work with Textgain, which develops its own technology and AI models. The technology only has access to the content of the processed messages, not to any further user data. Textgain is a Belgian company and a European reference in the field of online hate speech detection.


People from different backgrounds help in its development by making lists of words and expressions that are often used in negative contexts. These words are given a score and a label indicating how harmful they are (their toxicity).
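
As a toy illustration of how such a curated word list could be used for scoring, consider the sketch below. The words, scores and labels are invented for the example; Textgain's actual lexicons and models are far richer than a simple lookup table.

```python
# Toy illustration of scoring text against a curated word list.
# The words, scores and labels below are invented; Textgain's real
# lexicons and models are far richer than a simple lookup table.
TOXICITY_LEXICON = {
    "idiot":  {"score": 0.7, "label": "insult"},
    "stupid": {"score": 0.5, "label": "insult"},
}


def toxicity(text: str) -> float:
    """Return the highest toxicity score among matched lexicon entries."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    scores = [TOXICITY_LEXICON[t]["score"] for t in tokens if t in TOXICITY_LEXICON]
    return max(scores, default=0.0)


print(toxicity("That is a stupid idea"))  # 0.5
print(toxicity("Great proposal!"))        # 0.0
```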


Textgain's developers regularly meet with a diverse group of people to discuss new trends, words and contexts. Through this collaboration, we ensure that the technology remains fair and that AI bias in the algorithms is minimised. This way, people always stay in control.


Want to know more? Read the background article here.

Updated on: 25/06/2025
