In today's newsletter, we are going to talk about robot journalism and other robotic things.
My latest podcast is out, and it's about robot journalism. In it, we take you through the major elements of what robot journalism is and how it plays into the newsroom.
And I say 'we', because I have a special guest joining me on this one. Her name is Hannah, and ... well... she is not a human. But you will just have to listen to it to hear more ;)
This podcast was so much fun to make, because not only did I need to do a lot of research (as I always do) to shape the episode, but I also spent about a week and a half playing around with different tools to make Hannah come to life.
But, I won't spoil the episode for you here. Instead, put on your best shoes, go for a nice refreshing walk, and listen to it on the way.
Listen here (or in any of the podcast players): Episode 012: The Future of Robot Journalism
Another topic I have talked about a lot lately is the trend toward audio, where we see more and more publishers opting to use robots (aka text-to-speech) to read their articles aloud.
So, is that a good idea?
Well ...first of all, there are many different approaches to creating audio. At the highest end of the scale we have newspapers like Zetland, which I wrote a feature article about here.
At Zetland, not only do they make all their articles available as audio, they have defined their entire editorial focus around integrating audio, in one of the best ways I have seen so far.
And one of the reasons it's so good is because their own journalists read and record their articles.
This is great for a number of reasons.
First, it forces the journalist to think about audio as they are writing the story, which just makes the story work a lot better as audio to begin with.
Secondly, by having journalists read their own stories, a form of 'ownership' really comes into play when you listen to (or read) the articles. This creates a style of journalism that just feels more authentic, more cared about, and more real.
But more than that, it also creates a better connection between the story being covered and the journalism that goes into it. For most other newspapers, you feel there is a disconnect here.
If we go one step down from this, we find another form of narrated journalism where the publisher has instead hired professional voice artists to record articles for them.
One example is the app Audm, which uses professional narrators to read each story aloud, such as this one from the New Yorker (which you can listen to via their website).
The quality of this is quite good, even remarkable, and the whole thing reads like a very good audiobook.
But you don't get the finesse that we see with Zetland. First, because the journalists aren't involved in the audio production, some of the text doesn't quite fit when read aloud. The sentence structure is a bit awkward for voice, even though it works perfectly well as text.
Secondly, while the narration is extremely good, you feel distanced from the story. You don't feel like either the journalist or the publication behind it was really involved. So you get a great story, but you lack the journalistic connection.
It's good, but just not that good.
But then, one step further down, we also have robot audio, or automatically generated audio via text-to-speech engines.
The advantages of doing this automatically are many. Both the New Yorker and Zetland can produce their audio manually largely because they write much less content. Zetland, for instance, focuses on publishing two really good articles per day, whereas many other newspapers produce 100 articles per day. Manually recording that many every day very quickly becomes impractical.
So several publishers have now turned to automation to do this for them, and ... well ... it's tricky.
On one hand, the quality of text-to-speech engines is now quite good, and we have come a long way in just a few years from the purely robotic-sounding voices.
One example is my podcast above, but another example is a startup called SpeechKit, which several publishers in the Nordics (where I'm located) are using.
Here is a sample of what it sounds like when I asked it to read the start of my previous podcast about 'news fatigue'.
The quality of this is surprisingly high. It still sounds a bit 'flat', but the way machine-learning can now produce audio is getting really close to normal human narration.
So the technical innovations here are quite impressive, and it's easy to see why publishers are interested in this.
However, we also run into the same problems as before. Yes, we are now getting to a point where this technically works, but that doesn't mean your journalism works for audio to begin with.
For instance, most news stories are simply too shallow or too text-focused to work well as stories told by voice. It also erodes your distinctiveness, which is a problem for many publishers even before audio enters the picture.
If you have five newspapers all using this technology in the same way, you just get five news stories that all sound the same. You are not going to get any real growth because of that.
So, I'm not that impressed by this. I love the technology and the potential, but I hate how most publishers are just using it because they don't want to invest in doing something better.
And if the history of changes in the media has told us anything, it's that just dumping one format into another format never works out that well.
We need more than just a story read out loud. We need a journalistic focus, editorial engagement, and a connection that makes you stand out.
One area where I do find this to be very interesting is with the future trend of 'on-demand news'. For instance, imagine being able to say "Hey Guardian, what is going on with British Airways?"
The Guardian could then use machine learning to pick out the relevant information, assemble it automatically into a focused, personally relevant robot story for each listener, and then use technology like SpeechKit to read it back in real time.
That level of 'on-demand' journalism, where you are not just getting random 'mass-market' headlines, but news created on the spot and just for you, and then read to you if you are on a mobile device ... that's interesting.
And I predict that, in the future, this form of news will become a much bigger thing. And we can only do this by turning to robot journalism.
The key element here, however, is to design an experience for it, which is true for all forms of robot journalism. It's a new approach to news, not just a cheap way to avoid spending any time or money on it.