Can You Clone Yourself on Twitter?


One mad Twitter bot scientist took his tweets, harnessed the power of a neural network, and made a "better" version of himself. Sort of.

One of the first patterns my clone figured out was Twitter handles. @-symbols followed by names. Some were my actual friends. Most were handles it made up, like "radiatorboard" and "filpraphan" and "cmentbirdss."

This was one week into running my personal Twitter archive through a deep-learning neural network, hoping it would spit out something that tweeted like me. I'd made a couple dozen Twitter bots before this, mostly using glorified templates and filtering. But neural networks are modeled after the human brain! Surely they'd capture some fundamental quality of Twitter, like anxiety.

Plus, neural networks are having a moment. They're good at handwriting recognition and predicting human behavior. They compute your Netflix recommendations, they can colorize black-and-white photographs, and they're how Google instantly translates Hebrew into Swedish. They’re probably one of those miracle technologies with a secret dark side. After all, they're also how you generate those images that look like kaleidoscopes filled with spider eyes.

I'm no expert on deep learning, but I do like meddling with powers beyond my control. This was my crack at a Frankenstein's monster, only I could just download nine years of tweets instead of digging up graves.

Talk to DeepDubbs

Curious what Rob's Twitter clone sounds like? You can mention @deepdubbs on Twitter and it'll reply to you.

I assembled the arcane stack of computer programs you need to spawn these things, on a little server on my bookshelf. I did some math. Neural networks learn deep, but they also learn slow: Working 24 hours a day, mine would finish training itself on my archive in three months. If the computer lost power, it would have to start over. No pauses on growing a clone.

As it churned, the software spit out an unfinished brain-in-progress every eight hours or so. I wanted to track how it evolved over time, or even if it evolved, so I rigged up a way to connect the latest clone candidate to a Twitter account I called "DeepDubbs." Which technically doesn’t count as giving myself a nickname.
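Wiring a Twitter account to the freshest brain-in-progress mostly means finding the newest file the trainer has written. A minimal sketch in Python — the `checkpoints` directory and `.ckpt` extension are my assumptions here, not details from the real setup:

```python
import glob
import os

def latest_checkpoint(directory="checkpoints"):
    """Return the most recently written brain-in-progress snapshot, or None."""
    candidates = glob.glob(os.path.join(directory, "*.ckpt"))
    if not candidates:
        return None
    # The trainer drops a new snapshot every eight hours or so;
    # whichever file was modified last is the current clone candidate.
    return max(candidates, key=os.path.getmtime)
```

A scheduled job could call this, sample a tweet from whatever snapshot it returns, and post the result to the DeepDubbs account.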

In its first week, DeepDubbs spat out a few shards of interesting phrases:

"good snake, may be last"


"you drink the art serve," followed by a dead t-dot-c-o link.

Neural networks are great at deducing the forms of things, and DeepDubbs picked up pretty quickly on shortened URLs. It knows how they're supposed to look, just not that they're supposed to point to anything. My clone only shares unopenable tabs.
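DeepDubbs read text one character at a time, which is exactly how a model ends up knowing the shape of a t.co link while knowing nothing about where one points. A toy character-level Markov chain — a crude stand-in for the real neural network, with made-up link suffixes — shows the idea:

```python
import random
from collections import defaultdict

def train_char_model(corpus, order=3):
    """Count which character follows each length-`order` context."""
    model = defaultdict(list)
    for text in corpus:
        padded = "^" * order + text + "$"  # ^ marks start, $ marks end
        for i in range(len(padded) - order):
            ctx, nxt = padded[i:i + order], padded[i + order]
            model[ctx].append(nxt)
    return model

def sample(model, order=3, rng=None):
    """Generate one string by repeatedly picking a plausible next character."""
    rng = rng or random.Random()
    out, ctx = [], "^" * order
    while True:
        nxt = rng.choice(model[ctx])
        if nxt == "$":
            return "".join(out)
        out.append(nxt)
        ctx = ctx[1:] + nxt

# Trained only on the *shape* of shortened links, the model never sees
# where they point — everything it emits is a plausible-looking dead URL.
corpus = ["https://t.co/" + s for s in ["aB3xQ9LpZw", "Zk2mN8rTqV", "x7YdW4cJfH"]]
model = train_char_model(corpus)
```

With only three examples it just memorizes; feed it nine years of tweets and the overlapping contexts start producing novel, well-formed, unopenable links.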

DeepDubbs' output also tries to blend in with the data that created it, so most of its tweets have fake timestamps between 2007 and 2016, with the occasional mutated dispatch from the year 2812. My clone figured out that in late 2015, I stopped using Tweetbot.

In the second week, it started having longer opinions.

It mused, "our elementary bot and what at both is my own brain."

And then a few days later: "howling poses because they trying to like the joke"

Then, possibly a jab at me, quote: "bummed if not testing to be the computers of my process of having him to make twitter."

Was the bot grasping at meaning, or was I? Was it musing on its own jury-rigged existence, or is that just how I sound in public? A quarter through training, on April 17, DeepDubbs tweeted "happy birthday, @robdubbin is at least a mascot." My birthday is June 21st, and I kind of think of the bot as my mascot. But at least I got the birthday wishes and not @cmentbirdss.

When the experiment started, it was all so exciting that I'd upgrade the brain as often as possible, even if it only boosted the model's training level by a percentage point. Once that lost its novelty, I'd let it run for a week or more before upgrading, with gains of 10 points or more. Making my clone smarter became sort of a weekend ritual. I told myself I wasn’t playing God. I was more like a professor who let an algorithm teach all the classes. One time I upgraded the bot and later that same day, it tweeted "just got upd…" like a cyborg bro leaving the cyber-gym. I started to feel like we didn't have much in common.

On May 28th, I upgraded my Twitter clone from 84.8% to within 10% of its training goal—it would finish sometime that week. Literally a few hours later, I moved a speaker on my bookshelf and my hand brushed the power switch on the surge protector it was plugged into. The power died, and my stomach sank as I realized I’d taken a pratfall 25 miles into the marathon. My clone’s development froze at 94.9%, forever shy of what I’d defined as maturity. The surge protector incident will definitely come up when DeepDubbs enters clone therapy.

I always knew my goal of 100% was arbitrary, and grounded in zero understanding of philosophy or computational neuroscience. I just threw some big numbers into a long-haul algorithm and tended the fire until I managed to screw it up. Even at nearly full training, the bot mostly generated nonsense.

It hadn't been online all summer. I figured after three months of learning millions of things per second, it deserved a vacation. But I did have a few questions for my clone, so I spent a few days programming a way for it to answer.
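The usual way to ask a character-level model a question is to prime it: feed the question in as context, then treat whatever it predicts next as the answer. A toy sketch of the priming trick, again using a Markov stand-in for the real network, with an illustrative corpus and prompt:

```python
import random
from collections import defaultdict

def train(corpus, order=4):
    """Map each length-`order` context to the characters that followed it."""
    model = defaultdict(list)
    for text in corpus:
        for i in range(len(text) - order):
            model[text[i:i + order]].append(text[i + order])
    return model

def reply(model, prompt, order=4, max_len=80, rng=None):
    """Prime the context with the tail of the question, then sample onward."""
    rng = rng or random.Random()
    ctx = prompt[-order:]
    out = []
    for _ in range(max_len):
        choices = model.get(ctx)
        if not choices:
            break  # the model has never seen this context; it falls silent
        ch = rng.choice(choices)
        out.append(ch)
        ctx = ctx[1:] + ch
    return "".join(out)
```

Prime it with a question whose tail it recognizes and it rambles on from there; prime it with something it has never seen and it says nothing — or, in the real thing's case, crashes the program.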

I asked DeepDubbs, "So what do you think this was all about?" And it responded, "i am never strange to hear with me that i am sincerely in line." If that ever shows up as a New Yorker poem, please keep our secret.

I asked, "Are you happy?" And at first it crashed the program. But then I asked again, and it said, "happy like a bruise on that google sex fortune." Which I took as a yes.

After my experiment’s tragic shutdown, I was a little embarrassed to learn I could have done the whole thing in a few days with a better computer. Modern 3D graphics and complex neural networks thrive on similar hardware, so a hulked-out gaming PC or a fancy Amazon cloud computer could have spun me up a smarter DeepDubbs in a fraction of the time. Why had I relied on that little bookshelf server, so slow and prone to human error? Didn’t I want the best for my clone?

Moved to action, I leased computing space on one of Amazon’s most powerful cloud platforms, and set it to work from scratch on a new clone with twice the perceptual breadth of its predecessor. I gasped as training iterations flicked by ten times faster than they had back at homeschool. The new DeepDubbs learned my friends’ handles in a day instead of a week, then started correctly guessing which of my friends knew each other. I watched it branch out from plausible shortened links to plausible YouTube links. After seven days of intensive progress I checked my Amazon balance, burst into nervous laughter, and pulled my latest brainchild out of deep-learning private school.

I never wanted this to become a treadmill powered by a cash furnace. DeepDubbs needed more real-world experience, I told myself, because the most important lessons occur outside the classroom. I took the system I’d built to ask my clone questions, and hooked that up to Twitter so it could talk to anyone. It had, at least in theory, become smarter and more social than ever in its equivalent of a life. I wondered, so I asked, “are you still happy?” After a short pause DeepDubbs responded, “happy to sort ok selfies.” I swelled with pride. My clone was ready to join Instagram.
