Building a Bot with Bot Framework and LUIS

Jan Kratochvíl
5 min read · Jul 2, 2017

Chihaya is a simple bot I built to help me learn Japanese. It can look up translations and definitions, translate kanji to kana and a few other things.

It’s open-sourced on GitHub for everyone to check out. You can also try it on Skype. In this post I’d like to share a few bits about the experience of building it.

Tech Used

  • Bot Framework — A C# and Node.js framework for building bots. It gives you an opinionated way of building a bot and the ability to deploy to a ton of platforms at once, including Messenger, Skype, email, Kik, Slack and more.
  • LUIS — A “conversational intelligence” service. It extracts intents and entities from the user’s statements and feeds the result to your bot. It’s very easy to set up with statements like set {settingName} to {settingValue}, and it integrates well with Bot Framework.
  • ASP.NET Core — A web framework for C# devs. Super fast compared to traditional ASP.NET.
  • Microsoft Azure — Where you can host the bot. Integrates really well with Visual Studio.

Setting Up

Even though most of the samples use traditional ASP.NET, ASP.NET Core is also supported and works without any extra steps. For my scenario, ASP.NET Core yielded much better performance, which is critical: users expect your bot to react fast, and a slow bot makes for a frustrating experience.

One point of confusion you might encounter when using ASP.NET Core is where to put the MicrosoftAppId and MicrosoftAppPassword configuration. Adding them to the appSettings section inside app.config is the way to go.
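For reference, that section might look something like this (placeholder values; the real ones come from your bot’s registration in the Bot Framework portal):

```xml
<configuration>
  <appSettings>
    <!-- Placeholder values; use the ID and password from your bot registration. -->
    <add key="MicrosoftAppId" value="YOUR_APP_ID" />
    <add key="MicrosoftAppPassword" value="YOUR_APP_PASSWORD" />
  </appSettings>
</configuration>
```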

My project structure ended up looking like this:

[Image: project structure]

Dependency Injection

For dependency injection, Autofac is a good choice. Microsoft uses it in all of its samples, which makes it the go-to choice for the Bot Framework.

Because the Bot Framework is meant to be stateless, it expects all dialogs to be Serializable. For dependency injection this is a problem, as the injected services are not always serializable themselves.

You can solve this by adding a special key when registering your dependencies in the Autofac container. This tells the Bot Framework not to attempt serialization of those types and to let the container resolve them instead.
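Here’s a minimal sketch of such a registration, using BotBuilder v3’s FiberModule key (ITranslationService and TranslationService are hypothetical names):

```csharp
using Autofac;
using Microsoft.Bot.Builder.Internals.Fibers;

var builder = new ContainerBuilder();

// Keying the registration with FiberModule.Key_DoNotSerialize tells the
// Bot Framework's serializer to skip this service; the container supplies
// an instance again when the dialog is rehydrated.
builder.RegisterType<TranslationService>()
    .Keyed<ITranslationService>(FiberModule.Key_DoNotSerialize)
    .AsImplementedInterfaces()
    .SingleInstance();

var container = builder.Build();
```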

The Conversation Loop

Dialogs are the main component of your bot. Every dialog handles a specific part of the conversation and acts a bit like a controller, mediating between the user and the business logic embedded in your services.

There are two kinds of dialogs:

  • LUIS dialogs. These inherit from LuisDialog. They call the LUIS API to infer the user’s intent and then invoke a specific method within the dialog based on the inference.
  • Regular dialogs. These implement IDialog. They use more traditional means of inferring user intent (e.g. regexes, listed options, …).

In Chihaya, a single LUIS dialog is at the root of all conversations. When the user asks for something, LUIS infers the intent and the root dialog forwards the conversation to another dialog that fulfills the user’s request. Here’s how the process works:

[Image: hand-off between LUIS dialogs and regular dialogs]

LUIS dialogs look different from regular dialogs. While a regular dialog has a single entry point, a LUIS dialog has multiple entry points, one per user intent. Here are shortened samples of both:

A regular dialog that handles settings
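The original embedded gist isn’t reproduced here; the following is a minimal BotBuilder v3 sketch of what such a dialog might look like (ISettingsService and the command parsing are hypothetical):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;

[Serializable]
public class SettingsDialog : IDialog<object>
{
    // Injected by Autofac; registered with Key_DoNotSerialize (see above).
    private readonly ISettingsService settingsService;

    public SettingsDialog(ISettingsService settingsService)
    {
        this.settingsService = settingsService;
    }

    public async Task StartAsync(IDialogContext context)
    {
        context.Wait(MessageReceivedAsync);
    }

    private async Task MessageReceivedAsync(IDialogContext context, IAwaitable<IMessageActivity> result)
    {
        var message = await result;

        // Hypothetical parsing of "set {settingName} to {settingValue}".
        var parts = message.Text.Split(new[] { " to " }, StringSplitOptions.None);
        var name = parts[0].Replace("set ", string.Empty).Trim();
        var value = parts.Length > 1 ? parts[1].Trim() : null;

        await settingsService.SetAsync(name, value);
        await context.PostAsync($"Done, {name} is now {value}.");

        // Return control to the parent dialog.
        context.Done<object>(null);
    }
}
```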
A LUIS-based dialog that looks for lookup and translation intents and forwards the message to an appropriate regular dialog.
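Again a hedged sketch rather than the original code; the intent and entity names (“Lookup”, “Term”) and LookupDialog are made up for illustration. Instead of forwarding the raw message, it hands off via context.Call and sets the extracted value on the child dialog’s property (more on that below):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;

[Serializable]
[LuisModel("YOUR_LUIS_APP_ID", "YOUR_LUIS_SUBSCRIPTION_KEY")]
public class RootDialog : LuisDialog<object>
{
    [LuisIntent("")]
    [LuisIntent("None")]
    public async Task None(IDialogContext context, LuisResult result)
    {
        await context.PostAsync("I don't understand, sorry.");
        context.Wait(MessageReceived);
    }

    [LuisIntent("Lookup")]
    public async Task Lookup(IDialogContext context, LuisResult result)
    {
        // Pass only the extracted value to the regular dialog,
        // not the whole LuisResult.
        EntityRecommendation term;
        if (result.TryFindEntity("Term", out term))
        {
            context.Call(new LookupDialog { Term = term.Entity }, ResumeAfterChildAsync);
        }
        else
        {
            await None(context, result);
        }
    }

    private async Task ResumeAfterChildAsync(IDialogContext context, IAwaitable<object> result)
    {
        await result;
        context.Wait(MessageReceived);
    }
}
```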

Integrating LUIS and regular dialogs can be a bit challenging at times, as some of the types used differ, which becomes a problem when forwarding the conversation between the two kinds of dialogs.

This can mostly be solved by injecting the LUIS-extracted values directly into the regular dialogs’ properties instead of passing the whole LuisResult, as the sketch above does with Term.

Don’t Depend on LUIS

One of the main lessons I learned is that you should not rely on AI to always work as expected, even in simple scenarios. Sometimes LUIS fails to recognize very simple utterances like “translate how are you?”. Your code needs to take this into account.

For Chihaya, whenever LUIS fails to guess the intent or entities, I attempt to guess them myself before admitting defeat with an “I don’t understand” message.

There are many ways to do this; you can start with a very simple regular expression that double-checks whether the utterance starts with a known keyword like “translate”.
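A hypothetical fallback along these lines (the class and method names are made up):

```csharp
using System.Text.RegularExpressions;

public static class IntentFallback
{
    // Matches utterances like "translate how are you?" even when LUIS misses them.
    private static readonly Regex TranslatePattern =
        new Regex(@"^\s*translate\s+(?<text>.+)$", RegexOptions.IgnoreCase);

    public static bool TryGuessTranslateIntent(string utterance, out string textToTranslate)
    {
        var match = TranslatePattern.Match(utterance ?? string.Empty);
        textToTranslate = match.Success ? match.Groups["text"].Value.Trim() : null;
        return match.Success;
    }
}
```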

Use the Typing Indicator to Convey Latency

If your bot communicates with an external service, you should use the typing indicator to your advantage. Sending a typing indicator to the conversation is a great way to convey that your bot is actively working on giving the user a result.

Not sending the typing indicator can cause a lot of confusion, since the user has no idea whether the bot is “thinking…”, did not understand, or has simply stopped working.

I would even go as far as to recommend sending the typing indicator by default when a message enters the dialog code. You can do that easily by inheriting from a DialogBase that can look something like this:

Sending a typing indicator in all dialogs is a good idea
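The original gist isn’t embedded here; a minimal BotBuilder v3 sketch of such a base class might be:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;

[Serializable]
public abstract class DialogBase<T> : IDialog<T>
{
    public async Task StartAsync(IDialogContext context)
    {
        context.Wait(MessageReceivedWithTypingAsync);
    }

    private async Task MessageReceivedWithTypingAsync(IDialogContext context, IAwaitable<IMessageActivity> result)
    {
        // Post a typing activity first so the user sees the indicator
        // while any slow service calls run.
        var typing = context.MakeMessage();
        typing.Type = ActivityTypes.Typing;
        await context.PostAsync(typing);

        await MessageReceivedAsync(context, result);
    }

    // Derived dialogs put their actual logic here.
    protected abstract Task MessageReceivedAsync(IDialogContext context, IAwaitable<IMessageActivity> result);
}
```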

Don’t Try to Be Smart

To wrap up, I want to mention something that was said multiple times at this year’s Build conference:

When building Bots, resist the temptation to try to make them smart.

It’s very tempting to try to make your bot as smart as possible. Parse everything with an AI-based API, support conversations that go multiple levels deep and be able to make great conversation. My bot is going to be so smart!

In practice, trying to make that happen will create a frustrating experience. The bot will randomly not understand simple phrases, the user will get stuck in deep conversations without knowing how to get out, and posting GIFs when your bot fails won’t make it better.

Better to build a simpler bot that gives the user clear tips on what they can and cannot do. Sure, it’ll take some of the magic away, but your bot will end up being faster and more consistent, which is a much bigger benefit long-term.
