What went wrong with Clippy - the virtual assistant pioneer people loved to hate?

Microsoft’s ‘90s paperclip assistant taught us all about computer conversations - lessons you have to take into account when you design digital humans for a living, as FaceMe’s head of professional services, Simon Grieve, explains

The year was 1997. Titanic had just hit the big screen, Mike Tyson infamously gnawed Evander Holyfield’s ear, and the first Harry Potter book (not film, book!) was released. For anyone who remembers, 1997 feels like a lifetime ago.

Microsoft had just become the most valuable company in the world, worth $261 billion, but “Cortana” was still just a meaningless word. And yet, Microsoft was about to learn something crucial about the future of virtual assistants.

Office 97 launched, and with it came Clippy, the virtual paperclip designed to help you navigate Microsoft Word.

Clippy’s future was neither long nor bright. His designer would later admit, “people f*****g hate him”; TIME magazine named him one of the worst inventions of all time; and when Bill Gates publicly announced Clippy’s retirement in 2001, the crowd gave a standing ovation.

For many, Clippy taught us how to hate conversing with a computer—lessons you have to take into account when you design digital humans for a living, like we do at FaceMe.

But what actually went wrong with the sentient stationery, and what lessons can we all learn as AI-powered digital assistants quickly become the preferred channel for customer interactions?

What went wrong with Clippy?

First, let’s just say that Clippy was a pioneer. As a virtual assistant that responded to context and was continuously available, he broke new ground in the digital assistant space - ground that is only being fully explored and realised today.

That being said, we can learn more from Clippy’s mistakes than his successes—from a design, conversation development and purely human perspective.

1) A lack of functionality

Critics of Clippy say his real problem was that he was “optimised for first use.”

Clippy’s most popular action was to say “It looks like you’re writing a letter” and offer to help. That may be good if you are writing your very first letter, but if you aren’t (as most users weren’t), it became “infuriating”.

Never underestimate how important it is for a solution to be useful

Simon Grieve, FaceMe

His second most memorable feature? If you were idle at your computer - say, talking to someone in real life or thinking about what to write next - Clippy would start knocking on the screen to get your attention. Why it was designed to interrupt is puzzling. It was a cool animation in 1997, no doubt, but one that served no purpose unless you had genuinely dozed off mid-sentence.

What we’ve learned

So, what’s the lesson? Whether you’re designing a paperclip chatbot of sorts or a hyper-realistic digital human wealth advisor, the digital assistant has to have function, first and foremost.

When you’re building digital humans, you’re forced to constantly think: what’s the problem we’re trying to solve, and is a digital human the best platform for solving it? When we can answer both questions, we’ve got a good foundation for creating a highly functional digital human.

One of the biggest smiles I’ve seen on a customer’s face was when a couple completed a transaction with an in-store digital human and a receipt was automatically printed out for them. I think that was the moment they realised the digital human wasn’t just a gimmick, but could do exactly what they asked from start to finish.

Today, you can also build in situational awareness that Clippy never had. If you’re talking to a digital human and look away to speak to your child, the digital human will recognise that and wait for you to re-enter the conversation. And importantly, you can build digital humans today for many more use cases than Clippy ever had.
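To make that concrete, here is a minimal sketch of that pause-and-resume behaviour, written in Python. It is hypothetical - the class and signal names are invented for illustration, not FaceMe’s actual API - and the attention signals stand in for whatever gaze or face tracking a real system provides. The point of the design is that lost attention quietly pauses the conversation, rather than triggering a Clippy-style knock on the screen.

# Hypothetical sketch of attention-aware conversation pausing.
# AttentionSignal stands in for real gaze/face-tracking input.
from enum import Enum, auto

class AttentionSignal(Enum):
    ENGAGED = auto()      # user is looking at / talking to the assistant
    LOOKED_AWAY = auto()  # e.g. the user turns to speak to their child

class ConversationState(Enum):
    ACTIVE = auto()
    PAUSED = auto()

class DigitalHumanSession:
    def __init__(self):
        self.state = ConversationState.ACTIVE

    def on_attention(self, signal: AttentionSignal) -> str:
        # Lost attention pauses the conversation; no knocking on the screen.
        if signal is AttentionSignal.LOOKED_AWAY:
            self.state = ConversationState.PAUSED
            return "(waiting silently)"
        # When the user re-engages, pick up exactly where things left off.
        if self.state is ConversationState.PAUSED:
            self.state = ConversationState.ACTIVE
            return "Welcome back - shall we carry on?"
        return "(continuing the conversation)"

# Usage: the assistant waits out the interruption instead of nagging.
session = DigitalHumanSession()
print(session.on_attention(AttentionSignal.LOOKED_AWAY))  # (waiting silently)
print(session.on_attention(AttentionSignal.ENGAGED))      # Welcome back - shall we carry on?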

Overall, never underestimate how important it is for a solution to be useful. Clippy, unfortunately, did.

2) Poor co-design

Clippy was the final result of around 250 character design iterations. However, even though the Groucho Marx lookalike paperclip was the avatar Microsoft eventually went with, not everyone in the company was on board. In fact, even in the early design stages, Clippy wasn’t the only one at Microsoft suspiciously raising an eyebrow.

One of Clippy’s early detractors was Roz Ho, a Microsoft executive, who recalled the co-design stage of Clippy:

“We did a bunch of focus-group testing, and the results came back kind of negative. Most of the women thought the characters were too male and that they were leering at them. So we’re sitting in a conference room. There’s me and I think, like, 11 or 12 guys, and we’re going through the results, and they said, ‘I don’t see it. I just don’t know what they’re talking about.’ And I said, ‘Guys, guys, look, I’m a woman, and I’m going to tell you, these animated characters are male-looking’.”

Microsoft’s male-heavy leadership team decided to launch Clippy anyway, despite at least half of the world’s population potentially being put off by his eerie gaze.

What we’ve learned

Fast-forward to 2019 and co-design is much more inclusive - an absolutely essential (and exciting) step in designing believable digital human assistants that more people can connect with.

Once an assistant’s functionality is cemented, we start designing how we want the customer to feel when interacting with a digital human. We’ll workshop it, deciding what people want the character to look like, what they’ll sound like, and what personality traits they’ll have. Will they be authoritative or cheeky? Will they tell jokes, and how cheesy will those be?

We call it experience curation—once the functional stuff is taken care of, it’s this co-designed personality that makes a difference in how the user feels when talking to a digital human. One of the things I love about what we do is that we genuinely have the opportunity to make people smile when using our technology - that’s actually quite unique!

3) A lack of humanity

“The reason I think people hate him is not because of what Clippy is, but how Clippy acts,” the paperclip avatar’s illustrator, Kevan Atteberry, explained.

Stanford professor and consultant at Microsoft, Clifford Nass, went one step further when talking about how Clippy interacted with users.

“Clippy made it clear he was not at all interested in getting to know them,” he explained. “No matter how long users worked with Clippy, he never learned their names or preferences.

“If you think of Clippy as a person, of course he would evoke hatred and scorn.”

It was the lack of any emotional connection that was arguably the final nail in Clippy’s coffin. Not only did he offer little functionality, not only did he fail to make a warm first impression, he essentially couldn’t build any rapport with a user either.

He lacked any humanity, even the little you might expect from a paperclip.

What we’ve learned

In many ways, this is a problem you see with chatbots at the moment. To give an emotional reaction to a user, a chatbot can use an emoji, and that’s about as good as it gets. As a result, they struggle to create friendly, human-like interactions.

Conversely, it’s interesting to pick through anonymous data and see that people often talk to digital humans as they would to other humans. They tend to use the digital human’s name a lot, and they’ll say things like “tell me about yourself”, “where are you from?” and “what do you do in your spare time?”.

If you had asked any of these things of Clippy, you would have struggled to get anything more than an unfortunate eyebrow raise and an offer to help you write your 70th letter that day.

4) Making the wrong type of connection

Clippy was never designed to annoy, but to be genuinely helpful to people still getting used to 20th-century word processors. Despite that, he was a beautiful failure that, today, has reached meme status.

Clippy’s remembered as something that caused frustration and annoyance, but his memory lives on 22 years later. Why is that?

It’s because he made an emotional connection—he just made the wrong kind.

A lot may have changed since Rose left Jack in the middle of the icy Atlantic Ocean, Mike Tyson shocked the world with his gruesome fight tactics, and Harry Potter began his fight of good against evil, but our basic human responses to such events have stayed the same.

We cry when tragedies happen, we wince when something inhumane happens, we get attached to characters we relate to. We get irritated when lecherous-looking paperclips rub us the wrong way.

Emotional connection is the reason events from 22 years ago have stayed with us. It’s the reason Harry Potter products still sell by the shedload. And it’s the reason we remember nothing but annoyance with Clippy, because that’s the only emotion he made us feel.

What we’ve learned

Today, we are designing digital humans with emotional resonance in mind, so people begin making positive memories.

Twenty-two years from now, FaceMe would love customers to remember the emotional connection they had with a digital human and how it made them feel, whether it made them smile, laugh or just wowed them by doing something incredibly well.

It takes functionality, co-design, humanity and emotional connection.

So thanks, Clippy. For all your faults, we wouldn’t know as much about the psychology of human-to-computer interactions if you’d never seen the cruel light of day.
