An intervention for bot developers.
📅 March 18, 2016 in Essay
A bot will only know and learn as much as it is created to know and learn, and so those who create bots also need to know and learn.
— Joshua Decter (@joshuadecter) June 6, 2017
TLDR: Scroll down to the links section to learn how to make bots that don’t suck.
Note: This article was expanded and slightly modified on April 17, 2016.
I don’t know if you noticed, but there’s a lot of hype around bots and something called conversational interfaces.
Magic, an SMS-based buying assistant, was the most coveted company at YCombinator’s Demo Day in March 2015, ultimately raising $12 million from Sequoia. In the following months, GoButler raised $8 million from General Catalyst for an almost identical service, and Operator announced it had raised $10 million from Greylock. Suddenly, everyone was talking about the battle for tech’s next frontier.
(…)
Pavel Durov announced the expansion of the Telegram Bot Store and Ted Livingston staked out Kik’s claim to be the WeChat of the West. By the end of the year, Slack had announced the Slack App Directory, supported by an $80 million fund to fuel the growth of the ecosystem, and Google was rumored to be developing its own chatbots.
via techcrunch.com
So yeah, there is definitely hype.
And while the hype will most certainly die down, a lot of the new (and much improved) technology will stay. Are we going to get our news and weather through a Kik bot? I don't know. But right now, there are a lot of developers making all kinds of different, and sometimes even useful, bots.
Bot developers, consider this an intervention.
Beyond the Three Laws of Robotics
Regular readers of this irregularly updated blog will know that about seven months ago I started a few projects centered around making non-malicious online bots, mainly Botwiki and Botmakers.
When I created the Botmakers Slack group, one of the first things I had to do was create a Code of Conduct. Now, this is not an easy task as it is (for that reason, we have a dedicated channel where anyone can openly question, discuss, and suggest improvements to the rules we all agree to abide by).
But an online group where people create something that can in turn interact with other people (and other bots!) poses an extra challenge of its own. That’s why, immediately after compiling what I hope are good guidelines for an online community, I decided that our Code of Conduct should also apply to the bots created by the group’s members.
This actually makes perfect sense. Here’s a little secret, my botmaking friend: You are the bot.
Towards More Humane Tech
There’s already a huge push towards making the tech industry more diverse, and it’s going to be even more important once these automated “digital assistants” become more mainstream:
Siri found me 15 places to get a burrito in South Philly after 10pm. Siri found me three videos and five articles when I asked it how to roast a chicken. Siri even gave me tips for winning a fistfight.
But Siri had nothing to offer when I asked for help with rape, sexual assault, and sexual abuse. No resources. No comfort. It didn’t even bother to do a web search.
via medium.com
Here’s another great article highlighting more general problems with writing computer algorithms:
But in another context, user feedback can harden societal biases. A couple of years ago a Harvard study found that when someone searched in Google for a name normally associated with a person of African-American descent, an ad for a company that finds criminal records was more likely to turn up.
(…)
He says other studies show that women are more likely to be shown lower-paying jobs than men in online ads. Sorelle Friedler, a computer science professor at Haverford College in Pennsylvania, says women may reinforce this bias without realizing it.
via npr.org
As exciting as it is to be part of a new trend, the people participating in it also bear a huge responsibility.
What we’re witnessing could be the early days of a future where machines, and the way we interact with them (and they with us), are much more humanized and much more entrenched in our society. But all we’re doing is taking our understanding of what it means to be, and to interact with, a human, and trying to write programs and dialogs based on that.
Also relevant: We don’t know how to build conversational software yet
Alright, what now?
There is no one answer to how to make a bot that’s friendly, useful, and never embarrasses its creators. It really depends on the particular bot and what its purpose is.
For example, here’s a Slack bot that opens the door for you.
The bot knows a few hard-coded phrases, so as long as the creator is a nice person, the bot has a very low chance of offending anyone.
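For a bot like this, the entire vocabulary can be a lookup table. Here is a minimal sketch in Python; the commands and replies are made up for illustration, and a real Slack bot would plug something like this into the Slack API:

```python
# Hypothetical canned responses for a hard-coded-phrase bot.
RESPONSES = {
    "open the door": "Sure thing, opening the door!",
    "hello": "Hi there!",
}

def reply(message):
    """Return a canned response, or a safe fallback for unknown input."""
    return RESPONSES.get(message.strip().lower(), "Sorry, I don't know that command.")
```

Because the bot can only ever say what its creator wrote, the worst it can do is be unhelpful.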
With bots where you have complete control over what they say, it’s all about understanding who your audience is and adjusting your language accordingly. The best you can do here is to test your bot with a diverse audience and be very open to the feedback you will receive.
Some bots crawl various data sets and post what they find. With these bots, it really depends on the particular data set, but in general, since you don’t have complete control over your bot’s output, you will probably want to review the data before using it, and keep an eye on your bot’s output. Adding a simple word filter, if necessary, won’t hurt.
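A crude output filter might look like the sketch below. The blocklist here is a two-word placeholder; in practice you’d want a maintained word list, and you’d still keep reviewing the bot’s actual output:

```python
import re

# Placeholder blocklist; swap in a real, maintained word list.
BLOCKLIST = {"badword", "slur"}

def is_safe(text):
    """Reject text containing any blocklisted word (whole words only)."""
    words = re.findall(r"[a-z']+", text.lower())
    return not any(word in BLOCKLIST for word in words)

def post_if_safe(text, post):
    """Only hand text to the posting function if it passes the filter."""
    if is_safe(text):
        post(text)
```

Matching whole words (rather than substrings) avoids the classic problem of a naive filter rejecting innocent words that happen to contain a blocked string.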
And then you have bots that rely heavily on input from their users, like the Neon Clock Tweet Bot, or the ill-fated teenage Twitter bot Tay.
The very least you should do is add a basic filter so that your bot doesn’t post tweets with offensive words. But as I learned firsthand while writing some creative regular expressions to block the N-word when my Detective game somehow started getting popular on 4chan, people will find a way around any obstacle.
You will definitely want to add a more advanced filter, for example, based on the sentiment of the input.
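Here is one shape such a gate could take. The toy lexicon-based scorer below is only a sketch, with a made-up word list; a real bot would use a proper sentiment analysis library or API, but the check itself looks the same:

```python
# Tiny placeholder lexicons; a real bot would use a sentiment library.
POSITIVE = {"please", "thanks", "love", "yummy"}
NEGATIVE = {"choke", "bastard", "hate", "stupid"}

def sentiment_score(text):
    """Count positive words minus negative words in the message."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def accept_input(text):
    """Only act on input that isn't clearly negative."""
    return sentiment_score(text) >= 0
```

The point is that the bot reacts to the tone of the whole message, not just to the presence of individual banned words.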
Here’s another example from my own experience: Eddbott, a work-in-progress Twitter-based multiplayer Tamagotchi, can detect when you’re trying to feed it by uploading an image, but it will refuse to “eat” it if you accompany the image with an insult.
@eddbott can detect if a tweet contains an image of food … and if the accompanying message is positive, it will “accept the food”, which will decrease eddbott.stats.hunger. And if, for example, you tweet an image of a slice of cake and say “Choke on this, you bastard”, it won’t accept the food.
So really, it’s all about creative thinking, listening to feedback, watching what your bot says, and learning from other people’s experience.
Here are some more specific suggestions:
- join the Botmakers Slack group to share your experience, ask questions, and get feedback
- read through helpful articles on Bot Ethics
- check out some of the tools people made to keep their bots in check
- reach out to people outside of your usual social circles for feedback and suggestions
- be very open to constructive criticism
Here’s hoping for a bright future overrun with friendly bots!
Originally posted on fourtonfish.com.