Integrating RASA NLU in your Python code

Samrudha Kelkar
Published in tech-that-works · 2 min read · Aug 21, 2019


Have you heard about the latest RASA changes? For those who haven't, RASA is an open-source bot-building platform that comes with powerful components for developing enterprise-grade bots.

The stack is now divided into RASA X and plain RASA. Did you know that, apart from building a bot, you can also use RASA for interesting NLP tasks such as named entity recognition and text classification? We can use only the NLU part of the stack to train models for these tasks, and RASA provides a detailed guide for using it in an NLU-only manner.

But how do you run such a model? The documentation asks you to run the command below in your terminal:

rasa run --enable-api -m models/<name-of-your-model>.tar.gz

This starts a server that can be queried at an endpoint like the one below:

curl localhost:5005/model/parse -d '{"text":"<content-of-your-text-to-pass>"}'
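The endpoint returns a JSON payload containing the parsed intent and entities. An illustrative (not exhaustive) response might look like this; the exact fields and values depend on your training data and pipeline:

```json
{
  "text": "book a table for two",
  "intent": {"name": "book_table", "confidence": 0.97},
  "entities": [
    {"entity": "party_size", "value": "two", "start": 17, "end": 20}
  ]
}
```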

Sure, running a server helps keep your model decoupled from the rest of the pipeline. But this is not how a tinkerer might want to integrate the model into their pipeline. For example, what if you want to do some post-processing, say combining the output with a few other models (which may be non-RASA ones)? Making a separate network call just to get RASA's output is not something you want to do all the time, right?

The snippet below is a small hack that lets you use a model trained with the RASA stack directly in your Python code, as a standalone function. It avoids the network call and gives you much more control over the output.

  1. Untar your model.tar.gz file and go to the folder that is created (it should contain a folder called nlu).
  2. Copy the path of the model (up to the nlu folder) into the rasa_model_path variable in the code.
  3. Call the rasa_output function anywhere in your code where you need the model's output.
Code snippet for using the model as a stand-alone function
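The embedded snippet does not render here, but a minimal sketch of what such a standalone function can look like with the Rasa 1.x Python API is shown below. The names rasa_model_path and rasa_output follow the steps above; the example path and input text are assumptions for illustration:

```python
from rasa.nlu.model import Interpreter

# Path to the untarred model, up to the "nlu" folder
# (example path -- adjust to wherever you extracted your model).
rasa_model_path = "models/nlu"

# Load the interpreter once at startup; reloading it on every call would be slow.
interpreter = Interpreter.load(rasa_model_path)

def rasa_output(text):
    """Parse `text` with the trained NLU model and return the raw result dict
    (intent, confidence, entities, ...) -- no server or network call needed."""
    return interpreter.parse(text)

# Example: the dict can now be combined freely with other (non-RASA) models.
result = rasa_output("book a table for two")
print(result["intent"], result["entities"])
```

Because rasa_output returns a plain Python dict, you can post-process it inline instead of round-tripping through HTTP.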

The code can be accessed here. Shoot me a message if you have any doubts.
