Natural Language Processing (NLP) lies at the heart of every AI-driven chatbot conversation, and FrontM's Apps are no different.
When creating a bespoke App on FrontM's platform, the developer adds intents that correspond to behaviours (booking a hotel, say, or ordering an in-flight meal) carried out in order to achieve a particular end.
To support FrontM's Intent Execution Hierarchy, the platform calls upon third-party NLP engines. Frontm.js can integrate with a variety of engines, from Google Dialogflow to AWS Lex, Rasa, GPT-3 and more. Here, we explain how the integration process works.
NLP Identification
The Frontm.js scripts created by the developer connect to the NLP engine using an NLP id. This id must first be configured in FrontM's Dev Tools to point at Dialogflow (or whichever engine you prefer). The id is an alphanumeric string that is then added to the relevant intents; developers needn't worry about the details of the underlying connection.
For information on the configuration process, please see the page Connect a Dialogflow NLP agent.
Adding an NLP engine to your App
// Attach the NLP id (configured beforehand in Dev Tools) to an intent
let main = new Intent('main');
main.nlpId = 'MyNLP';
In the above example, an NLP engine (the id of which has already been configured using Dev Tools) is added to an App.
Accessing the NLP engine’s answers and parsed data
The NLP engine is, in effect, a lower-level bot that the App's Intent Execution Hierarchy can call on if needed. In other words, if the developer's script (specifically the Intent class's onMatching closure) does not match the user's message to any of the stored intents, the App will fall back to the engine.
The Intent object interacts with an NLP engine via the nlpResults object, which can hold the following properties (an illustrative sketch follows the list):
- action: The identifier of the intent that the NLP engine has matched to the user's message. It is alphanumeric and is evaluated during the matching process to give the correct context, since the same user input (e.g. typing "Yes") can correspond to different intents (e.g. deleting a customer or adding a new user);
- nlpParameters: The parameters the NLP engine parses from the user's input are placed in the nlpParameters object. For example, if the user types "Hello George!" then nlpParameters.name = "George";
- speech: If the NLP engine has directly delivered an answer, it is passed in this field;
- nlpSuggestions: This is an array of strings used if the NLP engine needs to send follow-up questions as suggestions to the user. These suggestions will take precedence over any suggestions scripted by the developer.
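For illustration, an nlpResults object might look like the sketch below for the input "Hello George!". This is only an indicative shape: the action name, speech text and suggestions are hypothetical and depend entirely on how the engine has been trained.
// Illustrative only: a possible shape for the object returned by
// state.getNlpResultsForId('MyNLP'). The action name, speech text and
// suggestions here are hypothetical and depend on the engine's training.
let exampleNlpResults = {
    action: 'greetUser',                            // intent matched by the engine
    nlpParameters: { name: 'George' },              // parsed from "Hello George!"
    speech: 'Hi George, how can I help?',           // direct answer, if any
    nlpSuggestions: ['Check balance', 'Top up $10'] // follow-up suggestions, if any
};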
Developers can access these properties using the state class method getNlpResultsForId():
let q3 = new Intent('topUp10');
q3.setEnglishSmartSuggestions(['Top Up $10']);
q3.onMatching = () => {
    // Match on the Smart Suggestion, or when the NLP engine resolves the
    // 'topUp' action with a parsed value of 10
    return q3.matchSuggestions(state.messageFromUser) ||
        (state.getNlpResultsForId('MyNLP').action === 'topUp' &&
            state.getNlpResultsForId('MyNLP').nlpParameters.value === 10);
};
In the example above, the intent q3 matches when the user touches the Smart Suggestion, or when the NLP engine resolves the action 'topUp' and the value it parses is 10. The parameter 'value' is defined within the NLP engine; if the engine did not add a parameter with this name, the result would be undefined.
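Building on the fallback behaviour described earlier, a catch-all intent could match whenever the engine has delivered a direct answer. The following is a minimal sketch, assuming a hypothetical intent name 'nlpFallback' and the 'MyNLP' id configured above; it uses only the matching mechanism already shown.
// Minimal sketch: a catch-all intent that matches whenever the NLP engine
// has returned a direct answer in its speech field. The intent name
// 'nlpFallback' is hypothetical; 'MyNLP' is the id configured in Dev Tools.
let fallback = new Intent('nlpFallback');
fallback.nlpId = 'MyNLP';
fallback.onMatching = () => {
    let results = state.getNlpResultsForId('MyNLP');
    return results !== undefined && results.speech !== undefined;
};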
Using multiple NLPs with an App
Training an NLP engine inevitably becomes more difficult as the conversational context grows more complicated. In other words, it is harder to train an engine to deal with questions that have similar wording but entirely different meanings. Having two NLPs, each with more specific training (i.e. able to resolve fewer intents), rather than a single engine with more complex training, can therefore give better results. To use multiple NLPs, assign each one to the relevant intents:
// A second intent backed by a different, separately configured NLP id
let newIntent = new Intent('newIntent');
newIntent.nlpId = 'MyOtherNLP';
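To make the idea concrete, two specialised engines might be split across intents as sketched below. The ids 'BookingNLP' and 'CateringNLP' and the action names are hypothetical; any ids configured in Dev Tools work the same way.
// Illustrative sketch: two intents, each backed by its own, more narrowly
// trained NLP engine (ids and action names here are hypothetical).
let bookHotel = new Intent('bookHotel');
bookHotel.nlpId = 'BookingNLP';
bookHotel.onMatching = () =>
    state.getNlpResultsForId('BookingNLP').action === 'bookHotel';

let orderMeal = new Intent('orderMeal');
orderMeal.nlpId = 'CateringNLP';
orderMeal.onMatching = () =>
    state.getNlpResultsForId('CateringNLP').action === 'orderMeal';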