Earlier this week, Medium announced a new metric, called the 'applause'. But there are some problems with it, which tie in with a more general concern about how metrics on social channels force us to optimize for the wrong things.
The main problem is the difference between what the public sees and what the author can see, which is also a massive problem with the metrics on sites like YouTube and Facebook.
Here is why:
If you haven't seen Medium's announcement yet, let me briefly summarize how their new applause metric, based on 'claps', works.
Basically, a clap is similar to a 'like', except that you can click it as many times as you want. So when you see an article that you really like, instead of applauding it just once, you can keep hitting the button and give it as many claps as you feel it deserves.
This is actually a rather interesting metric.
We all want more detailed and granular metrics, and this is certainly a very interesting way to get that. And as the author of a post, you can dive into this as much as you want.
Not only do you get to see the total number of claps, but you can also delve into the data and see how many claps each person gave.
This is pretty cool, as Medium explains:
Since day one, Medium has had a goal of measuring value. The problem, as we saw it, with much of the media/web ecosystem is that the things that are measured and optimized for were not necessarily the things that reflected true value to people. For example, a pageview is a pageview, whether it's a 3-second bounce (clickbait) or a 5-minute, informative story you read to the end. As a result, we got a lot more of the former.
On Medium, we've tried to provide more meaningful metrics. We display to our authors not only views, but reads (i.e., how many people got to the bottom of a post). We calculate time spent on posts and display that for publication owners. And we use all of this in our systems that determine which posts to distribute to more people. The goal is always to be able to suss out the great from the merely popular.
To make this more meaningful, Medium has created the clap function, which works like this:
Just click the 👏 instead of the ❤️. If you feel strongly, click it more (or just hold down). The more you clap, the more positive feedback you're providing to the author, and the more you're letting us know the story is worth reading. (Only the author can see how many claps you gave them.) Our system will evaluate your claps on an individual basis, assessing your evaluation of a story relative to the number of claps you typically send. All this will help the stories that matter most rise to the top.
This all sounds pretty good, but this is only half the story. The problem here is that there is a massive disconnect between what the public sees and what the author sees.
On almost all social channels, we have a big problem with vanity metrics, metrics that look really cool, but underneath don't really mean what you think they mean.
One of the worst examples of this is the view metric on Facebook (and to a lesser degree on YouTube). We all know that the view count on Facebook is one of the most misleading metrics in the world, simply because what they define as a view has nothing to do with people actually watching a video.
The bigger problem, however, is the disconnect between the view metrics that you can see as a publisher and what members of the public see.
The public view metric is this heavily inflated number, representing any view for any autoplaying video that happens to be in view for more than 3 seconds. So Facebook will tell the public that, for instance, this video has 4.8 million views.
However, privately, Facebook also provides the publisher with much more detailed and valuable view metrics.
For instance, they will tell you how many of those views are longer than 10 seconds (which is still a ridiculous measure); how many are played with sound or not; the average watch time; and the total play time.
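To make the disconnect concrete, here is a minimal sketch, using invented per-view durations (Facebook does not expose raw data like this), of how the same video produces both the inflated public number and the more honest private ones:

```python
# Hypothetical watch durations (in seconds) for ten views of one video.
# Illustrative numbers only, not real Facebook data.
durations = [3, 4, 5, 12, 3, 45, 3, 90, 6, 3]

public_views = sum(1 for d in durations if d >= 3)   # counted after 3 seconds
views_10s = sum(1 for d in durations if d >= 10)     # the '10-second view'
total_time = sum(durations)                          # total play time
avg_time = total_time / len(durations)               # average watch time

print(public_views)  # 10 -- every 3-second autoplay counts as a 'view'
print(views_10s)     # 3  -- only a minority watched past 10 seconds
print(total_time, avg_time)
```

The public sees only the first number; everything below it stays private to the publisher.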
These metrics, like the Medium applause metric, provide the publisher with far more valuable and accurate data.
But this also creates a problem, because the public metric is what we use to compare ourselves to other publishers, while the private metrics are what we use to identify whether those views actually had any real value.
So think about it like this.
Imagine you have two videos with these metrics:
compared to this:
Which one of these two is the best?
Well, if we look at the public view count, the second video looks more successful than the first video. But privately, we can see that the first video actually worked much better.
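As a hypothetical illustration (the numbers here are invented, not the ones from the screenshots above): total play time is simply views times average view duration, so a video with far fewer public 'views' can easily deliver more actual watching:

```python
# Invented metrics for two videos, for illustration only.
video_1 = {"public_views": 1_200_000, "avg_view_duration_s": 48}
video_2 = {"public_views": 4_800_000, "avg_view_duration_s": 6}

def total_watch_hours(video):
    # Total play time = views * average duration, converted to hours.
    return video["public_views"] * video["avg_view_duration_s"] / 3600

print(total_watch_hours(video_1))  # 16000.0
print(total_watch_hours(video_2))  # 8000.0
```

Publicly, video 2 looks four times as successful; privately, video 1 delivered twice the watching.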
This is the problem we have with all the social channels. Because they are showing the public one metric while privately giving us publishers another, we end up with an absolutely hopeless situation, where we are forced to care more about the inaccurate vanity metric than the truly valuable analytics.
It's the same on YouTube. YouTube has long shifted its focus to watched time. This is the primary metric for all internal YouTube analytics used to rank videos. But in public we still get the view count.
Our video discovery features were previously designed to drive views. This rewarded videos that were successful at attracting clicks, rather than the videos that actually kept viewers engaged. (Cleavage thumbnails, anyone?)
Now when we suggest videos, we focus on those that increase the amount of time that the viewer will spend watching videos on YouTube, not only on the next view, but also successive views thereafter.
If viewers are watching more YouTube, it signals to us that they're happier with the content they've found. It means that creators are attracting more engaged audiences. It also opens up more opportunities to generate revenue for our partners.
These changes have so far proved very positive - primarily less clicking, more watching. We saw the amount of time viewers spend watching videos across the site increase immediately, and this positive trend has continued as we've made additional tweaks to our suggestions. Improvements in the stats we measure, such as average view duration, provide a solid indication that the videos we are recommending throughout the site are even more engaging.
Think about how insane this is.
If YouTube knows that 'views' is the wrong metric, why do they show that to the public while keeping the valuable metrics private?
The result, again, is that it forces publishers to optimize for the public vanity metrics.
This has an impact on how we discuss success on YouTube. For instance, I often use examples from YouTube in my articles, because YouTubers are usually at the cutting edge of media. But I too often have to mention 'views' as a way to demonstrate their success.
I do this because I have no other choice, but I'm painfully aware that 'watched time' is a far more valuable and accurate metric than 'views'.
Because of this, the examples I point to are usually picked for something other than just the views. For instance, I have mentioned Joe Wicks in several of my articles because he is an amazing success overall.
But the problem remains.
When other publishers try to compare themselves to Joe's videos, they can only see the inaccurate view metric. And this leads to so many bad results.
Bad results such as when publishers start to optimize for more views rather than watched time.
On Facebook, we see these awful PowerPoint videos, where publishers create a slideshow of some kind, add some text to it... and then make 100s of them per month.
This is what you get when you optimize for the public view metric (the vanity metric) rather than the much more valuable metrics that the public can't see.
This is now the same problem that we see on Medium. Medium is different in that they are actually trying to provide publishers with more value, and that is great. But they have fallen into the same trap as everyone else.
And, again, the problem is that the public metric that everyone can see is highly misleading.
In public, you can go to an article, and you see this heavily inflated 'applause count'. For instance, here is an article with 14,400 claps.
But this is a meaningless metric, because, remember, people can click on the clap button as many times as they want. If you want to give an article 100 claps, you can (although most people probably wouldn't go that far).
This is a massive problem, because this becomes the number that publishers will start to compare themselves with. They will look at one article with 14,400 claps and think that is better than an article with only 12,800 claps.
But they can't actually know that. This number is even worse than the view metric because it's artificially inflated.
Mind you, Medium is also allowing the public to see how many individuals have applauded an article. By clicking on the applause number, you are presented with another page that looks like this:
This is a lot more useful, but it's still not good enough.
First of all, most people will never see this page, because it's a lot easier to just see the first metric.
Secondly, even though this does tell you how many people clapped, it doesn't tell you anything about how they did it.
For instance, you can calculate that the average is about 11 claps per person, but is that a meaningful number? Maybe most people only clapped once, while a much smaller group went completely bananas.
We just don't know, and we end up with the same problem as on Facebook and YouTube. We start to compare articles to the inflated metric that doesn't really tell us anything, while the really valuable metric is only visible to each individual author.
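A quick sketch, using invented clap counts, of how two articles with the same total and the same 11-clap average can describe completely different audiences:

```python
from statistics import mean, median

# Two invented clap distributions, 100 people each (illustrative only).
article_a = [11] * 100              # everyone claps 11 times
article_b = [1] * 90 + [101] * 10   # most clap once; 10 fans go bananas

for claps in (article_a, article_b):
    # Both show 1,100 total claps and an average of 11 per person,
    # but the median (11 vs 1) reveals very different audiences.
    print(sum(claps), mean(claps), median(claps))
```

The public-facing total and the per-person average are identical; only the distribution, which nobody but the author can inspect, tells you which article genuinely resonated.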
I'm reminded here of one of my all-time favorite comics, which is this one:
I just love it. The reason we can't trust 'C' is that its data has so much variance that it's completely useless. And it's exactly the same with the vanity metrics on social channels.
Medium is still experimenting with this. As Ev Williams commented:
One of the things I think we'll learn over time is how to calibrate what the number means on first glance. It will seem inflated at first, because we're used to seeing a 1:1 ratio. My theory is that we'll adjust to that and get a new sense what it means.
But if we find we need to display other/different meta info, we'll consider that. (It's something the team debated a lot while designing this.)
As I said in the beginning, there are many interesting elements to this, and I will be keeping an eye out for how they might tweak this over time. But, overall, it's a real problem.
We have already seen how vanity metrics have damaged the editorial strategy on other social channels, where chasing the wrong metrics has directly damaged publishers' ability to stand out and create real value.
The public metric should always convey the same level of accuracy and value as the metrics that you can only see privately. As long as there is a disconnect between the two, publishers will keep optimizing for the wrong one... because they don't have a choice.
As a publisher, you can't just ignore the public metrics, even though you know they are inaccurate, because the public metric is what your audience and your competitors see and it affects their opinion of you.
We are forced to make the public metrics look good, even though that is often the wrong strategy.
I don't have a solution to this, other than to help you become aware of it. But hopefully this can help you at least look at the real metrics first.
Founder, media analyst, author, and publisher.
"Thomas Baekdal is one of Scandinavia's most sought-after experts in the digitization of media companies. He has made himself known for his analysis of how digitization has changed the way we consume media."
Swedish business magazine, Resumé