By Thomas Baekdal - January 2023

The role of print in 2023 ... is the same as how we think about other formats

This is an archived version of a Baekdal Plus newsletter (it's free). It is sent out about once per week and features the latest articles as well as unique insights written specifically for the newsletter. If you want to get the next one, don't hesitate to add your email to the list.

Welcome back to the Baekdal Plus newsletter. Today, I have two stories for you. First, a story about the future of print, and then a story about how not to use AI.


What is the role of print in 2023?

One of the questions I get a lot from publishers is: what is the future of print? The answer depends on how we define it. Are we talking about print as a way to maintain past audiences, the segment that now averages around 70 years old for some publishers and is unlikely to ever switch to digital? Or are we talking about the next step for print, as in how we would define print if we were to use it to attract a new audience?

In my latest Plus article, I talk about this second step: how we define print, its advantages and disadvantages compared to all the other formats, and how that creates a future role where print may be a product.

It's not going to be like the print we used to know, but neither are the other formats we talk about all the time (like podcasts and newsletters).

So, take a look at: "What is the role of print in 2023?"


How CNET shot themselves in their AI foot

If you have read this newsletter for several years, you will know that I have mixed feelings about the future of AI. On one hand, I'm excited about it, and I do think that AI will play a much bigger role in the future (obviously).

But, on the other hand, I'm also a media analyst who cares about journalism, and most of the things I see around AI are not only detrimental to journalism but, more than that, completely violate what it means to be journalistic.

The latest example of this was from CNET:

The story starts on December 12, 2022 (well... kind of), when Jackson Ryan wrote a very good article about how AI is very interesting, but cannot do journalism.

I agree with this article, and I have shared a similar sentiment for a while now. As Jackson writes:

ChatGPT won't be heading out into the world to talk to Ukrainians about the Russian invasion. It won't be able to read the emotion on Kylian Mbappe's face when he wins the World Cup. It certainly isn't jumping on a ship to Antarctica to write about its experiences. It can't be surprised by a quote, completely out of character, that unwittingly reveals a secret about a CEO's business. Hell, it would have no hope of covering Musk's takeover of Twitter - it's no arbiter of truth, and it just can't read the room.

And I think most people in journalism would agree with this. The role of journalism is not the ability to write words on a page. Anyone can do that, even AIs. No, the real role of journalism is all the other things that we do: we investigate topics, find people to interview, fact-check, and so much more. And none of these fancy AIs can do that.

However, CNET's management had apparently not fully understood this because Gizmodo revealed that, despite the article above, CNET had actually used AIs to write articles for months.

As Gizmodo revealed, CNET was testing the tools to "try out the technology to see if there's a pragmatic use case for an AI assist on basic explainers around financial services topics." And its Editor-in-Chief said that the company's goal had been to see "if the AI engine would be able to assist their busy staff of reporters and editors with their job to cover topics from a 360-degree perspective".

And that they wanted to learn whether "CNET would benefit from AI content to provide available facts, allowing readers to make better decisions."

Now, as a media analyst, I have nothing against this. In fact, I welcome it when a publisher decides to experiment with new things. That's a good thing, but ... there are three massive problems here.

The first problem is with the use of AI in general. AI is great for some things, but it also dramatically lowers the barrier to entry. You don't have to go to journalism school to use an AI to write stories, you don't need to do any special work, and you don't need any unique skill. Anyone can publish AI stories.

And so, the future prospect of AI-driven articles is that of the lowest common denominator. Just think about it: if all the big publishers start doing this, not only will we see a future of news where we are flooded with cheaply produced articles (and 'excess volume' is already a major factor contributing to news fatigue), but it also means that no publisher is able to stand out.

We are literally filling the world with noise.

I'm not saying all AI causes this. For instance, I like what United Robots have done over the past decade. But using AI simply to increase the volume of articles published undermines the entire industry of journalism.

A future where publishers just add articles using ChatGPT is not a good future, even if ChatGPT were good at it. We are adding noise to people's lives. This is the opposite of editing.

The second problem is much more serious. Tools like ChatGPT don't know what they are writing. And because of that, they have no concept of whether something is a fact or not.

Now, CNET learned this the hard way because, after Gizmodo's investigation, it was revealed that several of the AI stories published contained factual errors.

This is not surprising at all. Over the past several months, we have seen hundreds of examples of these AI tools just making shit up, so obviously, the articles would contain factual errors.
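
To make this concrete, here is a toy sketch of the core idea behind generative language models (my own illustration, not how ChatGPT actually works, and the training sentences are invented for the example): pick the next word based on what statistically tends to follow, with nothing anywhere in the model representing whether the resulting claim is true.

    import random
    from collections import defaultdict

    # A toy bigram "language model": it learns which word tends to follow
    # which, then generates fluent-sounding text. Nothing in the model
    # represents whether a generated statement is actually true.
    corpus = (
        "compound interest is interest earned on interest . "
        "compound interest is calculated on the initial principal . "
        "simple interest is calculated on the initial principal only ."
    ).split()

    following = defaultdict(list)
    for word, next_word in zip(corpus, corpus[1:]):
        following[word].append(next_word)

    def generate(start):
        words = [start]
        while words[-1] != ".":
            candidates = following.get(words[-1])
            if not candidates:
                break
            words.append(random.choice(candidates))  # plausible, never verified
        return " ".join(words)

    print(generate("compound"))
    # One possible output: "compound interest is calculated on the initial
    # principal only ." -- perfectly fluent, but a false claim, stitched
    # together from two true training sentences.

Every individual word transition here was learned from true sentences, and yet the output can still be false. That is the mechanism behind these factual errors, just at a vastly larger scale.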

As Gizmodo put it:

Usually, when an editor approaches an article (particularly an explainer as basic as "What is Compound Interest"), it's safe to assume that the writer has done their best to provide accurate information. But with AI, there is no intent, only the product. An editor evaluating an AI-generated text cannot assume anything, and instead has to take an exacting, critical eye to every phrase, word, and punctuation mark. It's a different type of task from editing a person, and one person might not be well-equipped for, considering the degree of complete, unfailing attention it must take and the high volume CNET seems to be aiming for with its ChatGPT-produced stories.

This is the fundamental problem. An AI designed to 'make shit up' cannot be used to create a journalistic output. This is different from the output you see from companies like United Robots, whose AIs have a very specific and predefined input of data, which the AI merely rearranges into human-readable text. It doesn't try to combine things, 'fill in the gap', or mysteriously add a fact that was never there to begin with.
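
For contrast, here is a minimal sketch of that kind of data-driven text generation (my own illustration, not United Robots' actual system; the match data and field names are invented for the example). Every fact in the output comes straight from a verified, structured input record, and the code merely rearranges it into a readable sentence.

    # A minimal data-to-text generator: it never invents anything, it only
    # rearranges fields from a verified, structured input record.
    match = {
        "home_team": "FC Example",
        "away_team": "Sample United",
        "home_goals": 2,
        "away_goals": 1,
        "venue": "Example Arena",
    }

    def report(result):
        if result["home_goals"] > result["away_goals"]:
            outcome = f'{result["home_team"]} beat {result["away_team"]}'
        elif result["home_goals"] < result["away_goals"]:
            outcome = f'{result["away_team"]} beat {result["home_team"]}'
        else:
            outcome = f'{result["home_team"]} and {result["away_team"]} drew'
        return (
            f'{outcome} {result["home_goals"]}-{result["away_goals"]} '
            f'at {result["venue"]}.'
        )

    print(report(match))
    # -> FC Example beat Sample United 2-1 at Example Arena.

If a fact isn't in the input record, it cannot appear in the output. That is the property that separates this approach from the generative tools.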

As editors, we seriously need to remember this. AI tools designed to 'imagine things' cannot be used for journalism. Their very function means that we cannot trust them. And unless you plan to meticulously fact-check every sentence (which would not be cost-effective), none of these tools can be used by us.

However, AI tools that are designed from the start around factual inputs, those are interesting and can be used journalistically. But these two are not the same!

The third problem is plagiarism.

Tools like ChatGPT are fundamentally designed to take someone else's work (often from hundreds of sources all at once) and then present it as their own. We have a word for that: plagiarism. And right now, there are several lawsuits from artists suing AI companies for doing just this.

Most of the time, we don't really see it happening. But when Adobe said it wanted to start licensing AI-created images, and when the AI companies themselves said that their images 'transcended copyright' ... well, people put that to the test by simply asking the AIs to create images in the style of famous brands.

Here is an example:

These images were made simply by asking an AI to create an image in the style of Disney's Mickey Mouse. And it did.

And here are some of Star Wars and Marvel.

All of these pictures were created by AIs.

Here is the problem. If these pictures had been created by a human as a personal project, this would be fine. It would just be fan-made art. They couldn't sell it, of course, but they could make it.

But that's not what this is. These are AI-created images, meaning images created by companies that use this technology to attract investors and that have huge valuations and market caps. And so, all of these images are a violation of both copyright and Disney's trademarks.

The problem is, however, that this isn't limited to just Disney. The reason why these companies are being sued by so many artists is that these AI tools do it to everyone.

And it isn't just happening with images either. Here is an example of AIs taking someone else's writing: "A Writer Used AI To Plagiarize Me. Now What?".

But think about this journalistically. We cannot use this. The AI won't tell us where it got the information or the images from, so we don't know whether the images or the text are usable. We don't know whether something we publish tomorrow might cause us to be sued by the person the text or images were taken from.

This is not a question of the First Amendment (in the US) or the free press. This is a problem of plagiarism. And most of the AI tools we see today are fundamentally designed to be really good at plagiarizing.

But also think about this the other way. If you publish an article about something, anyone else could simply feed your article into an AI and ask it to rewrite it. I think it's only a matter of time before that happens at scale. For instance, every time the New York Times publishes an article, there will be hundreds of nearly identical, but slightly different-sounding, articles written about the same topic by AIs, and then published by other news sites.

This future scares me. It will drive down the value of journalism to zero!

Oh, and what happened to CNET? Well, they have now paused using AI to write stories.




Support this focus

Also, remember that while this newsletter is free for anyone to read, it's paid for by my subscribers to Baekdal Plus. So if you want to support this type of analysis and advice, subscribe to Baekdal Plus, which will also give you access to all my Plus reports (more than 300), and all the new ones (about 25 reports per year).
