Great chatbot rush: how to create a Loebner-level bot and get noticed

    The chatbot market is extremely crowded and full of big-dollar players right now. Something's cooking. But is it the narrow AI revolution we have been hoping for?

    According to figures from Facebook's F8 2017 conference, Messenger alone hosts more than 34,000 bots, and new ones are likely being added by the hundreds every day.

    Our name is legion

    Chatbots are everywhere. The market is so crowded right now that it looks like The Next Big Thing.

    Facebook, Microsoft, and IBM have all released their own solutions to the ongoing explosive growth, providing developers with tools to host and improve chatbots even further.

    Weak AI (or narrow AI, as some call it) has been around for decades. So why now? The Emerline team talked to Bruce Wilcox, a prominent chatbot creator, who shed some light on the industry's challenges.

    A four-time winner of the Loebner Prize, Bruce has been building chatbots and working on both narrow and general AI for several decades.

    We asked him how he sees the industry today.

    There’s a chatbot for that!

    Bruce Wilcox: In the past, chatbots were all about conversation. The field was divided more or less between academia and tech enthusiasts.

    Siri was one of the first bots of the new generation. It answered user questions with some level of effort. Siri was released in 2011, quite a long time ago, but its development effectively languished for years.

    Then Alexa came along inside Amazon Echo in 2015 and handled music. I was deeply impressed with Alexa's far-field technology, which lets you speak to it from across the room, even when there's a lot of noise. That was a stunning achievement given the noise levels that are normally a problem.

    And then last year suddenly everybody had a bot platform or was trying to create bots to do things. Travel agents, weather, news—the world of bots exploded. But these are not conversational bots; they are bots that you can give orders to.

    And most of them are barely competent because developers haven’t got a clue what effort they are setting themselves up for.


    You know nothing, Jon Snow!

    When developers create a new bot, they need to choose 3 to 10 intents as its base functionality, and each intent requires around 1,000 sample sentences for training.

    And a bot developer generally doesn’t have that data lying around and probably doesn’t know they need that much data in the first place. So what they do is think of all the phrases they can come up with, train the bot, and release it.

    And then the rest of the world thinks of everything else they forgot.
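To see why a handful of developer-written phrases falls short, here is a minimal, purely illustrative Python sketch; the intent names and sample phrases are all invented for the example:

```python
# Hypothetical training data: the handful of phrases a developer thought of.
TRAINING_PHRASES = {
    "check_weather": ["what's the weather", "weather forecast", "is it raining"],
    "book_flight": ["book a flight", "find me a flight", "flights to paris"],
}

def classify(utterance):
    """Return the intent whose sample phrases share the most words with the
    input, or None if nothing overlaps at all."""
    words = set(utterance.lower().split())
    best_intent, best_overlap = None, 0
    for intent, samples in TRAINING_PHRASES.items():
        overlap = max(len(words & set(sample.split())) for sample in samples)
        if overlap > best_overlap:
            best_intent, best_overlap = intent, overlap
    return best_intent

print(classify("what's the weather today"))  # check_weather
print(classify("will I need an umbrella"))   # None: phrasing nobody anticipated
```

The second query is about weather too, but since none of its words appear in the sample phrases, the bot has no idea. Real intent classifiers generalize better than this word-overlap toy, but they still fail the same way when the training set is too small.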


    Image credit: https://xkcd.com/1838/

    Bot developers usually lack a good system for chatting. Machine learning is not ideal for every application, since developers often don't have enough data to train the system properly.

    Machine learning is also useless if you need to create a personality. And in numerous contexts in the business world, personality is extremely valuable.

    Trading developers for English majors?

    Most chatbots are created purely by software engineers, an approach that has a few disadvantages:

    • Programmers are not writers. They lack an in-depth understanding of their own language.
    • Programmers can't write a persona. They can’t think of all the different ways you can say something, like a marketer or an English major could.

    High-quality chatbot development requires a full team. To do the job well, you have to blend the skills of multiple people: programmers, UI/UX designers, and copywriters. It’s hard to do as one person.

    But in the chat world at the moment, most of the contributors are just developers—not a development team with writers.

    Building a capable, Loebner-level bot

    When it first started in the 90s, the Loebner Prize competition was narrowly constrained in both time and topic. Interactions were as short as 2.5 minutes.

    The judges were also very limited in which topics they could choose to talk about.

    Over time, the Loebner approach has broadened. Right now, it’s 25 minutes of trying to fool a judge. And there are no restrictions on what the judge can ask or say to the chatbot.

    My own biggest challenge as a chatbot creator has always been the same: the unlimited scope of conversation. A conversation can run anywhere. It might begin on some thread of a subject the chatbot understands, but then the human can take it in a completely new direction. As humans in conversations do!

    So giving the chatbot enough breadth or the ability to stall, change, and appear to know what it is you are talking about is always the deepest problem.

    It all comes down to understanding context and being able to give sensible remarks.

    A chatbot is not like a human being.

    It has no practical experience of the world. We typically say that if you want to build a Loebner-level bot, it takes a minimum of 5 person-months. Which is not a huge effort, really. Not for a company trying to build a product.

    When we built Rose, we didn’t use machine learning to design our chatbot. We chose the topics we wanted to cover and wrote conversations for them. Then we added some additional specialized knowledge. Although right now, it’s largely driven by looking at what users have said.

    Machine learning or scripting?

    As I see it right now, there are two different paths available for chatbot developers. One is the machine learning path. It's suboptimal for bootstrapping, since it requires a lot of accumulated data to work correctly; if you lack that data, this path becomes a serious uphill battle.

    Another approach, the one that we chose, is scripting. We used ChatScript—a scripting language that allows you to build reasonably competent bots quickly and without huge amounts of data. But most people don’t use ChatScript.
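ChatScript has its own rule syntax, so the following is only a rough illustration of the scripted approach, rendered in Python: hand-written patterns, canned replies, and a fallback to stall when nothing matches. All rules here are invented for the example.

```python
import re

# Invented example rules, in the spirit of a scripted bot (real ChatScript
# uses its own pattern language, not Python regexes).
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! What shall we talk about?"),
    (re.compile(r"\bweather\b", re.I), "I hear it's lovely out. Do you like sunny days?"),
    (re.compile(r"\b(bye|goodbye)\b", re.I), "Goodbye! Come chat again."),
]

# The stall Wilcox mentions: appear engaged when no rule fires.
FALLBACK = "Interesting. Tell me more about that."

def respond(utterance):
    for pattern, reply in RULES:
        if pattern.search(utterance):
            return reply
    return FALLBACK

print(respond("hello there"))      # matches the greeting rule
print(respond("quantum physics"))  # no rule fires, so the bot stalls
```

The appeal is that a small team can write and ship rules like these in days, with zero training data; the cost is that every behavior has to be authored by hand.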

    They use all the other platforms, which are machine learning only. And that’s a slow process. You have to:

    • train your bot as much as you can;
    • release it to people;
    • review the logs (which is a manual process);
    • re-classify the inputs that were wrong (one by one, which takes forever).
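The log-review step above can be sketched as follows; the stand-in classifier, confidence threshold, and log format are all hypothetical:

```python
# Hypothetical stand-in for a trained model: returns (intent, confidence).
def classify(utterance):
    return "small_talk", 0.4

def review_logs(logs, classify, confidence_threshold=0.7):
    """Return utterances whose predictions disagree with the expected intent
    or fall below the confidence threshold; these need manual re-labeling."""
    needs_review = []
    for utterance, expected_intent in logs:
        predicted, confidence = classify(utterance)
        if predicted != expected_intent or confidence < confidence_threshold:
            needs_review.append(utterance)
    return needs_review

logs = [
    ("book me a flight", "book_flight"),  # misclassified as small_talk
    ("hi there", "small_talk"),           # correct label but low confidence
]
print(review_logs(logs, classify))  # both queued for manual review
```

Automating the triage narrows the pile, but the re-labeling itself is still the one-by-one manual work Wilcox describes.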

    Typically you’ll want to combine both methods as you progress further into development. You can use ChatScript to get a bootstrapped product up and running and then complement it with machine learning stuff from the user logs you accumulate over time.

    I think that to get started, scripting is the best approach. You can get something of reasonably good quality out there rapidly. And then you have a user base in place for machine learning.

    The great robot uprising has been delayed. But is it around the corner?

    We asked Bruce to give us a short overview of the chatbot industry and share his opinion about where the market is headed in the future.

    There are many different productive uses of narrow AI: autonomous vehicles, chess-playing machines, digital personal assistants. In the next 20 years, we will see a lot of new products based on narrow AI.

    But small bot developers are ultimately going to be in trouble—because companies with data, companies the size of Facebook and Google, are going to continue to conduct research and machine learning of conversations.

    At some point, they will succeed in getting something plausible out there.

    They haven’t reached that point yet. But when they do, small developers won’t be able to compete.

    It’s true that Google and Facebook open-source a lot of things. But, ultimately, if somebody has truly cracked the ability to have these conversations more universally, I don’t know if they would release that. They would prioritize commercializing the product and making money off of it.

    You’d also certainly expect the chatbot world to be used in virtual and augmented reality a lot, in large measure because you don't want to intrude into users’ visual space with the GUI. You can, but it's not an effective interface.

    In a world where you can have a sidekick just at the edge of your peripheral vision, it can talk to you, because you can use vision and speech at the same time. Voice is improving everywhere, and it's going to be a dominant interface.

    Welcome to the chatbot Wild West

    We would like to thank Bruce Wilcox for his insight and move on to drawing some important conclusions from this discussion:

    1. Chatbot developers will have to step up their game and make better bots—preferably by combining scripting and machine learning.

    2. Modern narrow AI restrictions limit the way chatbots can be used. Linear conversations are fine for simple workflows, but once users go off the beaten track, such bots will have a hard time understanding and serving them correctly. They will be next to useless for complex stuff. Instead of using linear conversations, chatbots will have to move to non-linear—working with context and revisiting things often.

    3. Narrow AI is here to stay, but that doesn’t mean that everybody involved in chatbot development right now will be in business in a few years. When the hype disappears and the world moves on to something new (AR, anybody?), only the strongest will survive.