This documentation explains how to set up and configure RASA NLU on your server (in our case, behind NGINX).

Before diving in - installation

Set up rasa NLU and the backends

pip install rasa_nlu
pip install -U spacy

python -m spacy download en
python -m spacy download fr

pip install -U scikit-learn scipy sklearn-crfsuite

See more @https://nlu.rasa.ai/installation.html

Create the datasets

data.json files

These files contain all the essential data needed to develop the chatbot: the example texts to be interpreted, annotated as structured data (intent/entities).
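As a rough illustration of what such a file contains, here is a minimal training example in rasa_nlu's native JSON format (the intent and entity names are invented for this sketch; see the rasa docs for the full schema):

```json
{
  "rasa_nlu_data": {
    "common_examples": [
      {
        "text": "book a table for two in Paris",
        "intent": "book_restaurant",
        "entities": [
          {
            "start": 24,
            "end": 29,
            "value": "Paris",
            "entity": "location"
          }
        ]
      }
    ]
  }
}
```

The start/end fields are character offsets of the entity value inside the text.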


data_en.json was obtained from the website luis.ai, where we defined the different intents and entities for our chatbot and then exported the bot to a .json file. From this website and the two following ones, you can also make XMLHTTP requests to parse texts and obtain structured data, as a replacement for RASA NLU. But with RASA:

To create this data.json file, several other tools compatible with rasa_nlu are available:

See the RASA migration doc for more information


You can also use a tool from rasa to edit your training examples.
This is the tool we used for data_fr.json.
It is available there (git repo).
It also offers an online version.
Or you can install it with npm (see the GitHub repo).


config.json files

The config.json files contain all the information needed to run the HTTP server, plus the settings for one project. In other words, you need ONE config file PER project you run on your HTTP SERVER. You can use the same port for different projects, but then you have to specify which project should be used to parse the text (see #Run a Request).
see more @ https://nlu.rasa.ai/config.html#section-configuration
Pay close attention to this file: if it is misconfigured, training or the HTTP server may fail, or the wrong model may be used.
For our project, we have two config files: one for English and one for French.

project :
@ https://nlu.rasa.ai/config.html#project

fixed_model_name :
@ https://nlu.rasa.ai/config.html#fixed-model-name

pipeline :
@ https://nlu.rasa.ai/config.html#pipeline

language :
@ https://nlu.rasa.ai/config.html#language

path :
@ https://nlu.rasa.ai/config.html#path

data :
@ https://nlu.rasa.ai/config.html#data
(has to be changed if you want to use another dataset)
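Putting these keys together, a config file for the English project might look like the following. The values here are illustrative, not this project's actual settings; "spacy_sklearn" was one of the predefined pipeline templates in rasa_nlu:

```json
{
  "project": "chatbot_en",
  "fixed_model_name": "model_en",
  "pipeline": "spacy_sklearn",
  "language": "en",
  "path": "./models",
  "data": "./data/data_en.json",
  "port": 5000
}
```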

Train the model

To train both datasets, use the commands

python -m rasa_nlu.train -c config/config_fr.json
python -m rasa_nlu.train -c config/config_en.json

where the -c argument gives the path of the configuration .json file to use for training

This creates all the data the chatbot needs in the output folder
(the directory names can be changed in the config.json files).

See more @https://nlu.rasa.ai/tutorial.html#tutorial

Run the server - use the model

python -m rasa_nlu.server -c config/config_en.json

See more @https://nlu.rasa.ai/http.html#running-the-server

Run a request

In a browser :

port (5000)

5000 here is the port on which the RASA NLU API runs. You can change it in the config files.


Everything to the right of “parse?q=” is the sentence to parse: the text from the user, which will be converted into structured data.


If you run multiple projects, everything to the right of “&project=” specifies the project that will be used to analyze the text.
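The request format above can be sketched as a small helper that builds the parse URL. The host, port, and project name below are assumptions for illustration, not values taken from this project:

```python
from urllib.parse import quote

def parse_url(text, project=None, host="localhost", port=5000):
    """Build the URL for the rasa_nlu /parse endpoint.

    The query string is percent-encoded so that spaces and accented
    characters survive the trip through HTTP.
    """
    url = "http://{}:{}/parse?q={}".format(host, port, quote(text))
    if project is not None:
        url += "&project=" + quote(project)
    return url

# Example: a French sentence sent to a hypothetical "chatbot_fr" project.
print(parse_url("bonjour le monde", project="chatbot_fr"))
# -> http://localhost:5000/parse?q=bonjour%20le%20monde&project=chatbot_fr
```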
See more @https://nlu.rasa.ai/http.html#running-the-server
and @https://nlu.rasa.ai/config.html

On the command line :

Make sure to install curl and mjson
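Whichever way you send the request, the server answers with JSON. The exact fields depend on the pipeline, but a typical response contains the recognized intent with a confidence score and a list of entities. Below is a hypothetical response (the values are invented) and one way to read it:

```python
import json

# A hypothetical /parse response; field names follow the rasa_nlu HTTP docs,
# the values are invented for illustration.
raw = """
{
  "text": "book a table in Paris",
  "intent": {"name": "book_restaurant", "confidence": 0.92},
  "entities": [
    {"start": 16, "end": 21, "value": "Paris", "entity": "location"}
  ]
}
"""

response = json.loads(raw)
intent = response["intent"]["name"]
locations = [e["value"] for e in response["entities"] if e["entity"] == "location"]
print(intent, locations)
# -> book_restaurant ['Paris']
```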


How to run the rasa HTTP server permanently

nohup ./run.sh &

Don’t forget to run “chmod +x run.sh” first so that the script is executable.

This runs the server in the background, even after you close the SSH connection (from PuTTY, for example),
and by default writes the logs to nohup.out.
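For reference, run.sh here is just a thin wrapper around the server command. A minimal sketch (the config path is an assumption; adjust it to your own layout):

```shell
#!/bin/sh
# Minimal run.sh sketch: start the rasa_nlu HTTP server with the
# English config file. Adjust the path for your own project.
python -m rasa_nlu.server -c config/config_en.json
```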

To stop the server later, first find its PID by listing the running processes:

ps -ef

And just run

kill [PID]

where [PID] is the PID of the process you want to stop.

@see https://fr.wikipedia.org/wiki/Nohup


See the licence.txt file for licence information.