| Description | Troubleshooting NLP issues, so that customers can self-service NLP issues before reaching GCS. |
| Version as of | 8.2 |
| Capability/Industry Area | Customer Service |
How to troubleshoot NLP issues with email, chat and messaging.
Text analytics or NLP (Natural Language Processing) issues fall under text categorization (topic, small talk, sentiment, and language) and text extraction (entities, keywords, and auto tags). Most issues can be handled in the channel interface using Dev Studio; advanced issues may require access to Prediction Studio. This article mainly covers the major NLP issues that can be addressed through the channel interface itself.
Symptoms of a potential NLP issue
- Topic is not detected
- Topic is detected but the case is not assigned
- Multiple mentions of a topic are not detected, or the action cannot be restricted to the first occurrence of the topic
- Language is not detected
- The model does not detect the topic
- Case properties are not detected
- Feedback data is not reflected
- Signatures and greetings in the email body are included in the detected entities
In Chat, Messaging and Live chat
- Small talk is not detected
- Topic not detected for an incoming chat request
- Escalation to an agent is not detected
Step-by-step debugging process
The debugging process includes the following steps:
- Test the channel: Every channel (email or chat) has a test button. Most listener- or channel-related issues can be isolated with this test. Check whether the right topics and entities are detected, along with their confidence scores.
- Test the text analyzer: Open Channel > Text Analyzer > iNLP > Settings > "Open text analyzer rule". If the error also shows up in this test interface, the problem lies in the text analyzer rule rather than the channel interface.
- Test the topic and entity models: The models are essentially decision data rules. They can be tested individually by launching them from the text analyzer.
- Test the model settings: Models operate at the sentence level and the document level. In the test interface, check the results according to how the model is used in production, and compare the confidence scores accordingly.
- Test the confidence scores: Model confidence score cutoffs (typically >70%) are used as a filter to remove low-score topics and entities.
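To illustrate the last step, the cutoff behaves like a simple filter over the detection results. The sketch below is only an illustration of that idea; the result shape and the 0.70 threshold are assumptions for this example, not a Pega API:

```python
# Illustrative sketch only: drop topics/entities whose confidence
# falls below the cutoff, as the text analyzer does with its filter.
# The data shape and the 0.70 threshold are assumptions, not a Pega API.

CONFIDENCE_CUTOFF = 0.70  # typical cutoff; scores below this are discarded

def filter_by_confidence(results, cutoff=CONFIDENCE_CUTOFF):
    """Keep only topics/entities whose confidence meets the cutoff."""
    return [r for r in results if r["confidence"] >= cutoff]

detected_topics = [
    {"topic": "Billing dispute", "confidence": 0.92},
    {"topic": "Account closure", "confidence": 0.41},  # below cutoff
    {"topic": "Address change",  "confidence": 0.75},
]

print(filter_by_confidence(detected_topics))
```

When a topic you expect is missing from the channel results, comparing its raw score against the cutoff in the test interface tells you whether the model missed it or the filter removed it.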
Other known issues
- The language is not detected: This usually happens when the input text is short and may not be a complete example. The expectation is that the input text has more than four words in the sentence.
- Incorrect language is detected: If there is a combination of languages in the input text, the resulting language may not be correctly detected. In the text analyzer advanced settings, you can configure the NLP engine to fall back to a default language when no language is detected.
- Training lost after an instance restart: The text analytics repository may be pointing to a temporary directory, which can result in data loss. Update the repository in Prediction Studio.
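The first two language issues above amount to a word-count guard plus a fallback language. The sketch below illustrates that logic only; the four-word minimum, the `detect_language()` helper, and the `"en"` fallback are assumptions for this example, not Pega functionality:

```python
# Illustrative sketch only: short texts skip detection entirely, and a
# failed detection falls back to a configured default language.
# MIN_WORDS, detect_language(), and the "en" fallback are assumptions,
# not Pega functionality.

MIN_WORDS = 4             # at or below this, detection is unreliable
FALLBACK_LANGUAGE = "en"  # configured fallback language

def detect_language(text):
    """Hypothetical detector: returns a language code, or None on failure."""
    # A real implementation would call an NLP language-identification model.
    return None  # pretend detection failed, e.g. on mixed-language input

def resolve_language(text, fallback=FALLBACK_LANGUAGE):
    words = text.split()
    if len(words) <= MIN_WORDS:
        # Too short for reliable detection; use the fallback directly.
        return fallback
    return detect_language(text) or fallback

print(resolve_language("thanks"))                             # short text
print(resolve_language("please close my account right away"))  # detection failed
```

In both cases the fallback language is returned, which is why configuring a fallback in the text analyzer advanced settings prevents short or mixed-language messages from going unprocessed.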