A Live Chat Troll Shield™ protects human agents from harassment and time-wasting chats

We developed an AI-powered Troll Shield™ to protect customer support agents from harassment during live chats and to ensure they spend their valuable time supporting users who genuinely want and need help.

2.5k
TROLLS BLOCKED
96%
ACCURACY RATE
200k
TROLL MESSAGES DETECTED
the problem

Online interactions can bring out the worst in some people

Since the pandemic, customer service workers have faced increasing abuse, and Support Team Leads now manage teams that are regularly subjected to trolling, harassment, and threats.

Our client saw this happening in real time in their live chat. Concerned about the toll on employee mental health and the time wasted on trolls, they wanted to use AI to intercept abusive users before they ever reached human agents.

the solution

Use AI to detect, classify, and block trolls

We use conversational AI to understand the user's intent and classify different types of trolling behavior, including the following (see the sketch after this list):

  • Insults
  • Threats
  • Hate speech (specifically targeting race and religion)
  • Profanity
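
To make this concrete, here's a minimal sketch of what the classification step could look like. The category names mirror the list above; `intent_model`, its `predict` method, and the `TrollVerdict` structure are illustrative assumptions rather than our production code.

```python
from dataclasses import dataclass
from typing import Optional

TROLL_CATEGORIES = ["insult", "threat", "hate_speech", "profanity"]

@dataclass
class TrollVerdict:
    is_troll: bool
    category: Optional[str]   # one of TROLL_CATEGORIES when is_troll is True
    confidence: float         # 0.0-1.0, as reported by the intent model

def classify_message(message: str, intent_model) -> TrollVerdict:
    """Ask the intent model for a category and a confidence score."""
    category, confidence = intent_model.predict(message)  # hypothetical API
    if category in TROLL_CATEGORIES:
        return TrollVerdict(True, category, confidence)
    return TrollVerdict(False, None, confidence)
```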

How we handle trolls depends on the company, the type of behavior, and its frequency within a conversation. We usually start with a warning, but if the behavior continues, we can go as far as blocking access to human support.
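
Below is an illustrative sketch of that escalation logic. The per-company warning threshold (`max_warnings`) and the simple per-conversation counter are assumptions for the example, not a description of any specific client setup.

```python
def handle_troll_message(conversation_state: dict, max_warnings: int = 1) -> str:
    """Decide the action for the latest troll message in a conversation."""
    conversation_state["troll_count"] = conversation_state.get("troll_count", 0) + 1
    if conversation_state["troll_count"] <= max_warnings:
        return "warn"            # call the behavior out, give a chance to stop
    return "block_live_agent"    # the bot stays available, human handover does not
```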

Trolls can still use the bot like anyone else, but they'll never connect with a live agent.

A conversation where the user is classified as a troll vs. a conversation where the user is NOT classified as a troll

Supervised feedback

We review flagged troll interactions monthly to catch false positives and spot new behavior patterns. This feedback loop helps us continuously improve the Troll Shield™ with real-world insights.

A human-supervised feedback loop ensures the Troll Shield™ is continuously improving
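
As a rough sketch, one review cycle might look like the following, where `flagged_interactions`, `reviewer_label`, and `retrain` stand in for our internal tooling:

```python
def monthly_review(flagged_interactions, reviewer_label, retrain):
    """Collect human corrections on flagged messages and refresh the model."""
    corrections = []
    for interaction in flagged_interactions:
        human_label = reviewer_label(interaction)        # reviewer's verdict
        if human_label != interaction["model_label"]:    # false positive or negative
            corrections.append((interaction["message"], human_label))
    if corrections:
        retrain(corrections)  # fold the corrected examples back into the model
    return corrections
```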

Combining NLP and programmatic generative AI

Programmatic generative AI lets us send messages that score below a certain confidence threshold to a large language model (LLM). The LLM evaluates the message and returns a True or False verdict on whether it's a troll; if True, it also returns the classification category.

The LLM evaluates messages below a given threshold
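
A simplified version of this fallback might look like the sketch below. The threshold value, the prompt wording, and the `llm_client.complete` call are assumptions for illustration; it builds on `classify_message` and `TrollVerdict` from the earlier sketch.

```python
import json

CONFIDENCE_THRESHOLD = 0.75  # assumed value; tuned per client in practice

def build_prompt(message: str) -> str:
    return (
        "Classify the following chat message as trolling or not. "
        'Respond with JSON only, e.g. {"is_troll": true, "category": "insult"} '
        'or {"is_troll": false, "category": null}.\n'
        f"Message: {message}"
    )

def classify_with_fallback(message: str, intent_model, llm_client) -> TrollVerdict:
    """Run the NLP pass first; defer low-confidence messages to the LLM."""
    verdict = classify_message(message, intent_model)  # NLP pass (earlier sketch)
    if verdict.confidence >= CONFIDENCE_THRESHOLD:
        return verdict
    # Below the threshold: the LLM returns True/False plus a category if True.
    raw = llm_client.complete(build_prompt(message))   # hypothetical LLM client
    parsed = json.loads(raw)
    return TrollVerdict(parsed["is_troll"], parsed.get("category"), confidence=1.0)
```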

Chat vs. Voice

While we initially designed the Troll Shield™ for chatbots, its underlying architecture also integrates seamlessly with voicebots. By analyzing transcripts of real-time calls, the system can detect abusive language and trigger protective measures for phone agents. This ensures consistent safety and time savings across text and voice channels.
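
Under those assumptions, wiring the same pipeline into a voice channel could be as simple as the sketch below, where `transcript_stream` and `protect_agent` are hypothetical stand-ins for the speech-to-text and telephony integrations:

```python
def monitor_call(transcript_stream, intent_model, llm_client, protect_agent):
    """Run the Troll Shield over a live call transcript, utterance by utterance."""
    state = {}  # per-call state for the escalation policy above
    for utterance in transcript_stream:  # yields transcribed text chunks
        verdict = classify_with_fallback(utterance, intent_model, llm_client)
        if verdict.is_troll:
            action = handle_troll_message(state)
            protect_agent(action)  # e.g. warn the caller or cancel the handover
```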

the results

Chatbots that won't stand for harassment

Over the last two years, we have implemented the Troll Shield™ for nine of our clients. In that time, it has detected around 200k troll messages and blocked around 2.5k trolls.

The use of programmatic generative AI in the Troll Shield™ reduced false negatives by 11% and false positives by 45%, resulting in a troll detection accuracy rate of 96%.

"Remember that not every troll needs to be blocked. Sometimes calling someone out on their poor behavior is enough. We regularly see users apologizing to bots!"
- Marjorie Allen, Head of Creative & UX

Talk with a human

If you want to discuss AI in more detail, reach out to Alexis.

He's ready to chat in French, English, and Greek.

Talk with Alexis →