The Agent allows you to train a model, load it, and use it. It is a facade providing access to most of Rasa Core's functionality through a simple API.
Not all functionality is exposed through methods on the Agent. To customize the individual components (domain, policies, interpreter, and tracker store), you sometimes need to orchestrate them yourself.
The class reference follows:
Agent(domain, policies=None, interpreter=None, tracker_store=None)¶
Public interface for common tasks, e.g. training an assistant or handling messages with it.
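As a sketch of how the constructor is typically used (the policy import paths are assumptions and may vary between Rasa Core versions):

```python
def build_agent(domain_path):
    """Sketch: create an Agent from a domain file (assumes rasa_core is installed)."""
    # Imports are local so this module can be loaded even without rasa_core.
    from rasa_core.agent import Agent
    from rasa_core.policies.memoization import MemoizationPolicy
    from rasa_core.policies.keras_policy import KerasPolicy

    # A memoization policy recalls exact training stories;
    # KerasPolicy generalizes beyond them.
    return Agent(domain_path, policies=[MemoizationPolicy(), KerasPolicy()])
```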
continue_message_handling(sender_id, executed_action, events)¶
Continue to process a message.
Predicts the next action for the caller to take.
Handle messages coming from the channel.
handle_message(text_message, message_preprocessor=None, output_channel=None, sender_id=u'default')¶
Handle a single message.
If a message preprocessor is passed, the message is first passed to that function, and its return value is then used as the input to the dialogue engine.
The return value of this function depends on the output_channel. If the output channel is unset, set to None, or set to CollectingOutputChannel, this function returns the messages the bot wants to respond with.
>>> from rasa_core.agent import Agent
>>> agent = Agent.load("examples/restaurantbot/models/dialogue",
...                    interpreter="examples/restaurantbot/models/nlu/current")
>>> agent.handle_message("hello")
[u'how can I help you?']
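The message_preprocessor can be any callable that takes the raw text and returns the text to feed to the dialogue engine. The sketch below is a hypothetical normalizer, not part of Rasa Core:

```python
def normalize_message(text):
    # Hypothetical preprocessor: collapse whitespace and lowercase the text
    # before it is handed to the dialogue engine.
    return " ".join(text.split()).lower()

# Would be passed as:
# agent.handle_message("  HELLO There ", message_preprocessor=normalize_message)
```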
load(path, interpreter=None, tracker_store=None, action_factory=None)¶
Load a persisted model from the passed path.
load_data(resource_name, remove_duplicates=True, augmentation_factor=20, max_number_of_trackers=2000, tracker_limit=None, use_story_concatenation=True)¶
Load training data from a resource.
Persists this agent into a directory for later loading and usage.
Start to process a message, returning the next action to take.
Toggles memoization on and off.
If a memoization policy is present in the ensemble, this toggles that policy's predictions. When set to False, the memoization policies in the ensemble will not make any predictions, so the ensemble's prediction always has to come from a different policy (e.g. KerasPolicy). This is useful for testing an ensemble's prediction capabilities while ignoring memorized turns from the training data.
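As a usage sketch (the helper name is my own, and it assumes an already-loaded agent):

```python
def predict_without_memoization(agent, message):
    """Sketch: handle a message while ignoring memorized training turns."""
    # Disable memoization so the ensemble's prediction has to come from
    # a generalizing policy such as KerasPolicy.
    agent.toggle_memoization(False)
    try:
        return agent.handle_message(message)
    finally:
        # Restore memoization regardless of what happened above.
        agent.toggle_memoization(True)
```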
Train the policies / policy ensemble using dialogue data from a file.
- training_trackers – trackers to train on
- kwargs – additional arguments passed to the underlying ML trainer (e.g. keras parameters)
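Putting load_data, train, and persist together (the paths and the epochs keyword are illustrative; the sketch assumes rasa_core is installed):

```python
def train_and_save(agent, stories_path, model_out):
    """Sketch: train an agent on story data and persist it for Agent.load()."""
    # load_data builds training trackers from the story file, applying
    # de-duplication and augmentation by default.
    training_trackers = agent.load_data(stories_path, augmentation_factor=20)
    # Extra keyword arguments are forwarded to the underlying ML trainer
    # (e.g. keras parameters such as epochs).
    agent.train(training_trackers, epochs=100)
    # Persist the trained policies so Agent.load(model_out) can restore them.
    agent.persist(model_out)
```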
train_online(training_trackers, input_channel=None, max_visual_history=3, **kwargs)¶
visualize(resource_name, output_file, max_history, nlu_training_data=None, should_merge_nodes=True, fontsize=12)¶
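A usage sketch for visualize (the file names are illustrative, and rendering the graph may require additional dependencies such as graphviz):

```python
def draw_story_graph(agent, stories_path, out_file):
    """Sketch: render training stories as a graph for inspection."""
    # Merge similar nodes and cap the history depth to keep the
    # resulting graph readable.
    agent.visualize(stories_path, out_file, max_history=3,
                    should_merge_nodes=True, fontsize=12)
```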