
The Tipping Point – The “Stickiness” Factor

by Amrinder Arora
November 22nd, 2010

In “The Tipping Point”, Malcolm Gladwell allocates a generous portion of the book to the quality of “stickiness” – does your marketing message stick?  The premise is that small, subtle changes to your message (or website, or product) can make a big difference in how it is perceived.  The book recounts social psychologist Howard Leventhal’s classic “Fear Experiment”, in which two versions of a tetanus booklet were shown to university students.  One version was a “high fear” version containing gory pictures; the other was a more objective version containing the same factual information but without the gory pictures.  The “high fear” version provoked a much higher initial response rate (the number of people who said they wanted to get vaccinated) than the low fear version, but when it came to actually getting the shots, both versions evoked a pathetic 3% response rate.  However, a simple change to the booklet (including a map of the campus with the health center circled and the times shots were available) tipped the vaccination rate to 28%.  This change turned the information in the brochure into practical, personal, action-initiating information rather than just an academic exercise.

This is not the same as saying that small changes can accumulate to make a big difference (although that is true as well).  Gladwell’s message is that the small changes (even by themselves) can make a big difference.

Applying the principle of stickiness

Having bought into the stickiness factor, the question becomes how to apply it to your actual problem – a product, blog, website, presentation, etc.

Product

One of the known best practices in product design is the DYNO (Disturb Your Neighbor / Officemate) routine.  The basic idea is that when you finish some piece of work, you ask a neighbor to come take a look at your product (and possibly trash talk it).  This applies whether your “product” is software, hardware, a literary article or a kitchen.  It also applies whether your neighbor is a software developer, the receptionist, the CFO, the senior staff scientist or the sales guy.  The kind of feedback we can get simply from our neighbors goes significantly beyond what we might get from a very thorough (and possibly expensive) user test.   Back in the days when I was a software consultant at Verizon, I helped rewrite a portion of the billing logic using WebLogic integrated with an IBM mainframe application.  After getting some output snapshots, just out of pride, I sent them to 4 unfortunate “neighbors”.  Besides learning that this is simply not the “Verizon” way of doing things (that too without a full QA-supported UAT, PAT, SAT, DAT, RAT and 700 other Ts – imagine that!), I also learnt that: (i) my app blew up if there were exactly 99 records, (ii) my app was retrieving 10 records but showing only 9, (iii) the “Bill” icon was actually the invoice icon, and most importantly that (iv) my neighbors didn’t really mind if I didn’t bother them too often.
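The bugs above are classic boundary cases, which is exactly the kind of thing a neighbor’s quick glance tends to catch. As a minimal sketch (the paginate helper and the record counts are hypothetical, not the actual Verizon billing code), such boundaries can be checked in a couple of lines:

```typescript
// Hypothetical pagination helper, used only to illustrate the boundary
// cases mentioned above (exactly 99 records; 10 retrieved but 9 shown).
interface BillingRecord {
  id: number;
  amount: number;
}

function paginate(records: BillingRecord[], pageSize: number, page: number): BillingRecord[] {
  const start = page * pageSize;
  return records.slice(start, start + pageSize);
}

function makeRecords(n: number): BillingRecord[] {
  return Array.from({ length: n }, (_, i) => ({ id: i + 1, amount: 100 }));
}

// Quick "neighbor-style" boundary checks rather than a full test suite.
console.assert(paginate(makeRecords(99), 10, 9).length === 9,
  "last page of 99 records should show 9 records");
console.assert(paginate(makeRecords(10), 10, 0).length === 10,
  "retrieving 10 records should show all 10, not 9");
```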

In a more mature setting, it is still very possible that you will find nuggets of gold from your neighbors testing your product, and it is very likely that they will each point out different nuggets.  We as human beings are sufficiently different to have different reactions to the same input.  And yes, your lunch buddy, or your neighbor’s lunch buddy who just happens to be in the vicinity when you finish your product, qualifies 100% as a neighbor for the DYNO routine.  Diversity trumps profile or interest by a significant margin.

In “Glimmer”, author Warren Berger makes a very similar point when talking about Sam Farber’s company OXO Good Grips.

…so many of the company’s products can be traced to someone asking why a common device couldn’t be made a little bit better or why an everyday task couldn’t be made (a little bit) easier.

Websites (and Web Applications)

Steve Krug’s legendary book “Don’t Make Me Think” reads like a direct application of Gladwell’s stickiness principle: every change it recommends appears to be a small change.  The material is the same, the flow is roughly the same.  Still, the impact of each change is quite significant.

The first version of the BizMerlin resource project assignment report showed data essentially in the following table format.

User Project Assignment - version 1

The very next version of BizMerlin (I believe v3.0) made a slight improvement to the user project assignment report.  Essentially, when you hover over a project or a resource, it highlights all the users and projects that that user (or project) is allocated to.  Looking at the picture below, hovering over “Cloud Computing”, we can see that Andy Jones is assigned to it 100%, and Don Lee and Melissa Peters are assigned at 50% each.   A simple, subtle change that saves valuable time during each resource allocation review meeting.

User Project Assignment - version 2
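For readers curious how this kind of hover highlight can be wired up, here is a minimal sketch in TypeScript.  The table markup, data attributes and class name are assumptions for illustration only, not BizMerlin’s actual implementation:

```typescript
// Minimal sketch: highlight every cell that shares a project with the hovered
// cell. Assumes each cell in the assignment table carries a data-project
// attribute; the "highlight" CSS class is illustrative.
function setupAssignmentHighlight(table: HTMLTableElement): void {
  table.addEventListener("mouseover", (event) => {
    const cell = (event.target as HTMLElement).closest<HTMLElement>("[data-project]");
    if (!cell || !cell.dataset.project) return;
    // Mark every cell allocated to the same project as the hovered cell.
    table.querySelectorAll<HTMLElement>(`[data-project="${cell.dataset.project}"]`)
      .forEach((el) => el.classList.add("highlight"));
  });
  table.addEventListener("mouseout", () => {
    // Clear the highlights when the pointer leaves a cell.
    table.querySelectorAll<HTMLElement>(".highlight")
      .forEach((el) => el.classList.remove("highlight"));
  });
}
```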

Presentations (and Brochures)

However, if there is one place where the stickiness factor applies more than others, it is clearly in the case of presentations and printed materials like brochures.  I believe this is because with presentations and brochures, the user (the reader, the ideal passer-by) gives you an exceedingly short window during which the material must seize the initiative and stick.  If the moment is gone, it is gone.  With physical products and software applications, the user may be fortunately/unfortunately locked in for a little bit of time, but with a presentation, the speaker usually has less than 2 minutes to captivate the audience, or the audience will zone out and decide to work on presentations of their own – even if they have to sit through yours.

Testing the stickiness factor using A/B testing and other techniques

A/B testing (also called split testing) is a marketing method used to test the effectiveness of two options – kind of like when an optometrist asks, “Is A better, or is B better?”  The positive aspect of this method is that it is fairly simple to use and fairly simple to deploy.  Google AdWords even offers A/B testing as the “easy” option within Website Optimizer.  The negative aspect is that the insights to draw from the results of an A/B test are far from straightforward.  While it may be true that version “A” of your product is better than version “B”, it may be harder to draw insights that apply to other circumstances (other products, etc.).  The hidden challenge is that the different versions of a website (or product, or presentation) do not evolve in a way that is amenable to local optimization – in other words, a much better product cannot always be created by making many “positive” changes to a product; rather, it must go through some negative changes as well on the path to a significantly positive change.  One of the early leaders in A/B testing was Amazon.  If you haven’t yet, you should read Bryan Eisenberg’s Amazon.com analysis here.
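As a rough illustration of how the results of such a split test are usually read, here is a sketch of a two-proportion z-test in TypeScript; the visitor and conversion counts are made up for the example and are not from any real test:

```typescript
// Two-proportion z-test: is version A's conversion rate significantly
// different from version B's? The numbers below are illustrative only.
function twoProportionZ(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;                          // conversion rate of version A
  const pB = convB / nB;                          // conversion rate of version B
  const pooled = (convA + convB) / (nA + nB);     // pooled conversion rate
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  return (pA - pB) / se;
}

// Example: version A converts 280 of 5,000 visitors; version B converts 230 of 5,000.
const z = twoProportionZ(280, 5000, 230, 5000);
// |z| > 1.96 corresponds to roughly the 5% significance level (two-sided).
console.log(`z = ${z.toFixed(2)}, significant at 5%: ${Math.abs(z) > 1.96}`);
```

Even when such a test declares a clear winner, the caveat in the paragraph above still applies: the result tells you which of two specific versions performed better, not why, and not whether the same change would help elsewhere.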

Glimmer: http://www.amazon.com/gp/product/B003H4RAO0


2 Comments to “The Tipping Point – The “Stickiness” Factor”

  1. Excellent analysis! A different book “Made to Stick” talks about many of these principles.

  2. Thanks Greg. Yes, I love “Made to Stick” and have a review of that as well.