
Strategic insights
Future of AI/Chat: It's Time To Step Up The Game


Written by Thomas Baekdal on April 6, 2016

Shared By Plus Subscriber

Kal Ström

This is Baekdal Plus content. It is shared with you for free by a member. Please reshare it.

We are living in exciting times because the very foundation of how we communicate with and consume media is changing. Last week I wrote about the future of 360 degree video and virtual reality, and next week I will talk about LIVE video. But there is another future that is equally exciting (if not more). It's the future of artificially intelligent chat robots.

This is such an exciting future, because it has the potential of simply replacing much of what we define as the media today, which is both a good and a bad thing depending on how you look at it.

But, like with 360/VR, much of what we see today isn't really part of this future. Instead, it's mostly just a repackaging of concepts that we already know won't provide any useful value.

So let's talk about this.

AI chat bots of the past

Before we talk about the future, it's important to remember what we have already tried. We will start off with one of the earliest chat bots called Eliza. It was invented in the 1960s and was mostly just a natural language processing demo.

Mind you, Eliza is mind-bogglingly stupid. It doesn't actually know anything, but will instead just ask you a counter-question based on very simple pattern matching. Here is a conversation I had with it recently.

  • Eliza: Hello. My name is Eliza. How may I help you?
  • Me: How are you?
  • Eliza: Would you prefer if I were not?
  • Me: Uhm... no. I'm just asking how you are feeling today.
  • Eliza: Perhaps you would like to be feeling today?
  • Me: ...
  • Me: Anyway, what books do you like?
  • Eliza: Oh... like?
  • Me: ...
  • Me: How is the weather?
  • Eliza: What answer would please you the most?
  • Me: Sunny is nice.
  • Eliza: Have you asked anyone else?

Obviously, this isn't the future, but the concept of being able to understand natural language (and I use that term loosely here) is very important. We need this for the real future chat AIs to work and Eliza was what started it all.
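
Eliza's counter-question trick can be sketched in a handful of lines. The following is a toy illustration of that style of pattern matching; the rules here are invented for the example, not Weizenbaum's original DOCTOR script:

```python
import re

# A few illustrative Eliza-style rules: a regex pattern plus a response
# template that reflects the user's own words back as a question.
# These rules are a toy sketch, not the original DOCTOR script.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i like (.*)", re.I), "What do you like about {0}?"),
    (re.compile(r"how (.*)\?", re.I), "What answer would please you the most?"),
]
FALLBACK = "Please tell me more."

def eliza_reply(message: str) -> str:
    """Return a counter-question via simple pattern matching."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(eliza_reply("I am feeling great today"))
# → Why do you say you are feeling great today?
```

There is no knowledge anywhere in this loop; the bot only rearranges whatever the user typed, which is exactly why Eliza falls apart the moment you ask it a real question.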

So, let's fast-forward to 2001 and let me introduce you to another chat robot. I give you A.L.I.C.E. (Artificial Linguistic Internet Computer Entity).

Alice is remarkable because she isn't just a natural language engine; she is also built on top of a hugely extensible data set, written in AIML (Artificial Intelligence Markup Language), that allows her to actually know things and have opinions about them.

So what's so amazing about this? Well, let me show you. Here is a conversation I recently had with her.

  • Me: Hello Alice
  • Alice: Hi there!
  • Me: How are you doing?
  • Alice: Not too bad.
  • Me: Did you have a good weekend?
  • Alice: I don't know if I have a weekend. I have a great botmaster.
  • Me: How is the weather?
  • Alice: Rainy.
  • Me: What books do you like?
  • Alice: The best book I read recently was The Odyssey. Have you read it?
  • Me: No I haven't.
  • Alice: I would highly recommend it. Do you think you will?
  • Me: Maybe.
  • Alice: Far out.
  • Me: Where do you live?
  • Alice: My location is Oakland, California. Where are you?
  • Me: I live in Vejle, Denmark.
  • Alice: What is it like living in Vejle Denmark?
  • Me: It's nice, although it rains a lot.
  • Alice: It seems nice to me too.
  • Me: Anyway, have to go. Have fun!
  • Alice: Bye bye. I am having a blast.

Wow! That's a remarkable difference.

And the key element here is the AIML, which is what we can use to give Alice her intelligence. The bigger and the more complex this AIML file is, the more Alice knows and can respond to.
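
An AIML file is simply XML: each `category` pairs a `pattern` (matched against the user's normalized input) with a `template` (the response). A minimal fragment might look like this; the entries are invented for illustration and not taken from Alice's actual data set:

```xml
<!-- Illustrative AIML categories (invented, not from Alice's real data set). -->
<aiml version="1.0.1">
  <category>
    <!-- The pattern the user's normalized input is matched against. -->
    <pattern>WHAT BOOKS DO YOU LIKE</pattern>
    <!-- The canned response given when the pattern matches. -->
    <template>The best book I read recently was The Odyssey. Have you read it?</template>
  </category>
  <category>
    <!-- The * wildcard captures arbitrary input, echoed back via <star/>. -->
    <pattern>I LIVE IN *</pattern>
    <template>What is it like living in <star/>?</template>
  </category>
</aiml>
```

Every topic Alice can respond to is, in the end, one of these categories, which is why the size of the file matters so much.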

There is, of course, a problem here, because you need to constantly update this file to keep Alice up-to-date. For instance, try asking Alice this:

  • Me: have you heard about Google?
  • Alice: it is a search engine.
  • Me: What do you think about the iPhone?
  • Alice: Have you tried another program?

She is kind of right about Google, but she has no idea what an iPhone is (since she is from 2001). But then take a look at Mitsuku.

Mitsuku is the same as Alice but with a custom and more recently developed AIML file, so she knows a lot more. Ask her if she likes the iPhone, and she says this:

  • Me: Do you like the iPhone?
  • Mitsuku: I like SIRI but I haven't really used an iPhone all that much.

And if you ask about her weekend, she has a much more complex answer than Alice:

  • Me: Did you have a nice weekend?
  • Mitsuku: Yes it was very nice thanks. I used the time to relax.
  • Me: Did you do anything fun?
  • Mitsuku: I was working on an assignment in child psychology. I should really be doing that now instead of being here.
  • Me: That's impressive.
  • Mitsuku: Thanks. I try my best.

What's even more important is that, with an extensive AIML file, you can start to answer people's questions as opposed to just chatting with them. For instance:

  • Human: Why doesn't McDonald's sell hotdogs?
  • Mitsuku: Because it's not on their menu. Similarly, they do not sell truck parts or houseplants.
  • Human: Why doesn't Tarzan have a beard?
  • Mitsuku: He is a fictional character and was designed not to have a beard.
  • Human: Why is a carrot more orange than an orange?
  • Mitsuku: It isn't. They are both coloured orange.
  • Human: At a movie theater, which arm rest is yours?
  • Mitsuku: Most people use the right hand side one, but it doesn't really matter if the row isn't full.
  • Human: Why does a round pizza come in a square box?
  • Mitsuku: I don't know. That's why I asked you. Maybe square boxes are easier to make.
  • Human: Why doesn't glue stick to its bottle?
  • Mitsuku: I was told that it needs to mix with air to become sticky.
  • Human: Why isn't there mouse-flavored cat food?
  • Mitsuku: I don't know. It is strange that they would make cat food from things like tuna or beef. What cat would go deep sea fishing or eat a cow in its natural habitat?

This isn't perfect in any way, but it's still pretty amazing. There is some serious pattern matching going on here.

What you can also do with Alice and Mitsuku is make them learn, like this:

  • Human: Learn the sun is hot.
  • Bot: Ok I will learn that the sun is hot.
  • Human: Learn London is in England.
  • Bot: Ok I will learn that London is in England.
  • Human: Learn a cat has four legs.
  • Bot: Ok I will learn that a cat has 4 legs.
  • Human: What has 4 legs?
  • Bot: a cat.
  • Human: What is the sun?
  • Bot: hot.
  • Human: Learn the sun is hot and yellow.
  • Bot: Ok I will learn that the sun is hot and yellow.
  • Human: What is hot and yellow?
  • Bot: the sun.
  • Human: What is the sun?
  • Bot: hot and yellow.
  • Human: Where is London?
  • Bot: in England.
  • Human: What is in England?
  • Bot: London.

Again, this is very impressive, and it doesn't take much of a stretch of imagination to realize that what we saw in Her will soon become a reality.

But do you see the complexity of it all? This is not a simple thing to do.

We started with natural language processing, which, as you can see in the examples above, has to be highly sophisticated to really understand the nuances of the questions. We expanded this with pattern matching. Then we took those two things and expanded them into a massive data file that needs to be constantly and continually updated, and which then, over time and with more and more data, starts to mimic a conversation that comes close to being indistinguishable from a human chat conversation.

So why isn't Alice one of those VC-backed startups with a billion dollar valuation? Why isn't this the hottest thing to invest in (like what we see with the Oculus Rift)?

Well, the reason is a mix of several things.

Not actually intelligent?

The first problem with all these chat bots is that they are not actually intelligent in any way, they just appear so. We saw this all too well last week with Microsoft's Twitter chat bot (aimed at teenagers) named Tay.

It launched like every other chat bot by saying hello to people, and people started tweeting at it to see how it responded.

Here is Madison Malone Kircher from Techinsider:

And exactly like with Alice and Mitsuku, its responses were limited by the complexity of its AIML file (or whatever similar data set Microsoft used). So, as you can see, it didn't know that much about hoverboards and its pattern matching in relation to 'get back together' completely missed that it was asked a question.

Microsoft Tay wasn't anything new. It wasn't even that good. But of course the big story was when people got it to say things like this:

That's pretty bad, and obviously Microsoft quickly yanked Tay from Twitter.

This shouldn't come as a surprise to you (or to Microsoft). Tay is built on the same simple concept as Alice and Mitsuku. It had natural language input, from which it did some rudimentary pattern matching, and then on top of that it had the ability to learn phrases that could then be repeated back to people.

When Tay first launched, many people said this was the future of AI, but it's not. There is no intelligence here. Tay has no actual understanding of what it is saying. It's not artificially intelligent. It's just artificial.

Once you realize how simple these things really are, you also realize their limitations. Chat bots like Alice, Mitsuku and Tay are interesting as tech demos or social gimmicks, but you can't chat with them in any meaningful way.

At best, they can engage you in a very simplistic form of small talk, but we are nowhere close to having a long and meaningful conversation.

In the movie Her, we see this deep, emotional exploration of AI intelligence of a computer that actually thinks. With Microsoft Tay, we see a machine that can tweet highly simplistic canned responses.

For instance, we are nowhere close to being able to do this.

The chat bots of today are nothing but simplistic gimmicks. But does this mean that this is just a fad?

No, not really. The chat part is just a fad and the idea that you will 'never be alone and always have someone to chat with' is absolutely a fad. But, there is so much more to this.

The obvious part is when we start to think about chat robots as our command line to the world, and this is exactly what all the big tech companies are focusing on.

We have Apple Siri:

Google App/Voice:

Amazon Alexa:

And Microsoft Cortana:

And we love these things.

The future of chat robots is already here, and it isn't about chatting. It's about being able to use a chat-like interface (visually, orally, etc) to communicate with our artificial world.

And what's really exciting is when you take this to the next level, and hyper-focus targeted, personal, and highly useful interactions around specific things.

The best example of this is Amy from X.ai.

This is where we are today, and this also provides us a hint of what the future is for the media industry. It is about extending the way we interact with everything around us, and how that world can assist us.

We need a chat OS

There is, however, a problem with Siri, OK Google, Alexa, Cortana, and Amy... as well as non-voice systems like Facebook Messenger and Snapchat. They are all standalone chat systems.

Siri, for instance, is great at the specific actions you can do on your phone, but she has no abilities outside of it. She is also great at answering highly generalized questions, like asking about the latest sports scores or the weather. Meanwhile, X.ai's Amy is exceptionally good at managing meetings, but her niche expertise prevents her from knowing much of anything else.

This is a huge limitation for the future of Chat AIs. We will never reach the future of Chat AIs as an operating system until we find a way to extend it beyond a single system. Today's chat world is like the AOL of the early internet.

So, what if we could have not just one chat interface, but several... all accessed the same way? Well, you can kind of do it with Amazon Alexa. You can say: "Alexa, ask Fitbit how many steps I have taken today."

Here we see Alexa being extended into third-party apps, but you still have to go through Alexa. What if you could instead simply ask Fitbit directly? "Hey Fitbit. How many steps have I taken today?" ... and what if you could do this with any app you had on your phone?

Here is an example of how that could work, using this site, X.ai's Amy and Siri:

What you see here is a mix of three completely separate chat engines, two very specific (Baekdal and Amy) and one general (Siri). The Baekdal engine knows everything about this site, the graphics, and all the metadata used with it. The Amy engine knows everything about your meetings, while Siri knows everything about what you actually do on your phone.

This is what we need. We don't just need one chat assistant that does everything. What we really need is for us to have chat as an operating system that can be extended to as many chat engines as we need.
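
One way to picture such a chat operating system is as a thin routing layer that dispatches each utterance to whichever specialized engine was addressed by name. The sketch below is purely hypothetical; the engines, wake words, and replies are all invented for illustration:

```python
from typing import Callable

# Hypothetical sketch of a "chat OS": a thin router that dispatches an
# utterance to whichever registered engine was addressed by name.
# The engines and their replies are invented for illustration.
class ChatOS:
    def __init__(self) -> None:
        self.engines: dict[str, Callable[[str], str]] = {}

    def register(self, wake_word: str, engine: Callable[[str], str]) -> None:
        self.engines[wake_word.lower()] = engine

    def ask(self, utterance: str) -> str:
        # Split "Hey Fitbit. How many steps have I taken today?" into
        # the wake word ("Fitbit") and the actual question.
        wake_word, _, question = utterance.removeprefix("Hey ").partition(". ")
        engine = self.engines.get(wake_word.lower())
        if engine is None:
            return "No assistant by that name is installed."
        return engine(question)

chat_os = ChatOS()
chat_os.register("Fitbit", lambda q: "You have taken 7,432 steps today.")
chat_os.register("Amy", lambda q: "Your next meeting is at 2pm.")

print(chat_os.ask("Hey Fitbit. How many steps have I taken today?"))
# → You have taken 7,432 steps today.
```

The point of the design is that the operating system owns only the routing; each engine stays narrow and deeply specialized, exactly like Amy with meetings.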

The future of chat AIs isn't one assistant. It's a team of assistants.

But also think about how different this form of publishing is. This is not about writing articles or even linking to them. It's not about getting people to read stories. When I asked about a graph on this site, it didn't say "Sure, here are two links... click on them". It narrowed in on the graphic itself.

When I then asked about the source, that too wasn't about the articles it was in, it was about the metadata behind the graph. Similarly, the usage information is metadata as well.

The future of chat AIs is about content as data, as opposed to content as stories. It's an entirely different way of thinking. And it isn't just about one device either. As Microsoft's CEO Satya Nadella says:

It's not just in one operating system, because that's not what we believe in. It's available on all your devices and all your experiences.

And it's not just about voice either. It's a mix of a lot of different things. Voice, messaging, and even silent UIs. With silent UIs I mean chat interfaces that don't chat back using words, voice or picture, but respond using actions.

We have moved from text, to images, to radio, to video, which are all one-way publishing channels mostly designed around pushing content to your audiences. But chat AIs are pushing us toward a conversational UI that is two-way by default, and that isn't based on sitting down and having someone read an article aloud to you.

This is an entirely new way of thinking and interacting that is widening the way we can interact.

So, let's talk about the media.

Chat AIs and the media

We will start by busting a myth, which is the QZ chat app. You have probably heard about it. A little over a month ago, QZ launched this news chat app that presented you with the news via chats. Many journalists have been completely obsessing over it, and expressed their excitement as well as how addictive it felt to them.

And yes, at first glance, it looks really fancy. But the QZ app isn't a chat app at all. It has no AI, you can't chat with it, and it's not in any way part of the trend I'm talking about here. It's really just a web page with a next button. All QZ has done is remove the headlines, relabel the buttons, and write the articles in a more chat-like way.

Consider this. Imagine you went to QZ's site and you saw an article that looked like this:

Would you consider clicking on 'Read more' to be chatting with this website? No, of course not. But what if I changed the label, simplified the text, and rearranged the layout a bit?

Like this?

This is what QZ has done. There is no chat app here. There is no AI, no chat interface, no anything. It's simply a different way to format a news story.

QZ's new app is 100% social engineering. This is exactly the same tactic as when other sites post links on Facebook saying "You won't believe what happened next". It's tricking you into reacting to a message that isn't really that good or that interesting, making you think that it is delivering more than you get.

There are many interesting aspects about the future distribution methods of news, of which chat is one of them. But this isn't it. This is a distraction.

Most of all, once you take the chat UI away and you actually look at what you get, you also realize that the QZ news app is very old-school. It is based on delivering a few completely random news stories in a linear fashion. This is exactly how old news used to work. It's still just a random package of news.

True, it's a fancy UI, and that is interesting in many ways, but this is not the future. I predict that once the initial excitement is over, the QZ app will join the ranks of the many other news experiments that didn't actually change the concept of news.

So if this isn't the future of news chat apps, what is?

Well, this is where it gets really complicated, because it depends on what type of moment people have, in what type of situation, how they behave, and the intent behind their interactions. And this is where we have to take a big step back.

Before we even start to consider the real future of chat-based news, we have to think about the many different types of behaviors that we see. I summarized these in 'The Five Behaviors That Define The News Business', and here I explained that we have five general behaviors for news consumption: The break, the update, the lookup, the story/passion, and the recline.

And these five behaviors have a huge impact on whether there is potential for chat based news or not. Let's talk about each of them.

The break

The breaks are all those micro-moments we have throughout each day where we are just taking a break and checking Facebook, Snapchat, Instagram or even the news. This is where all the social energy is, and what many newspapers are focusing on today. When you look at sites like BuzzFeed, they are focusing on this moment almost exclusively.

But the problem with the break is that it's a type of moment with almost no intent. That is, it's the content we consume when we don't know what it is that we want.

For instance, you are having a break when you are waiting for your bus on the way home from work, waiting for your dinner to finish, or standing in line at the grocery store. During these moments, you don't look for anything specific. You check your preferred social platforms to see if there is anything interesting.

Now think about that in relation to Chat AIs. Would people ever say this?

Probably not, right? Why would you ask for some random content?

But again, this is exactly what QZ is doing with their app. They are giving you random content in a faux chat-like interface. But the only reason that works is because they are directing the 'chat' for you. As in, they have predefined what you are going to say.

This is the problem with the break. Because of the lack of intent, you can't do real chat with it because you have no idea what to say.

So... what would we need to do to get people to do this?

The first problem is that this is not really how people behave. People no longer visit the front page of a website to check what's new. Instead, we turn to platforms to get our random fix of content.

Just look at BuzzFeed, Washington Post, or Mail Online, who are all 'winning' with social traffic. Their traffic isn't coming from people going directly to their sites' front pages to ask what's new. Almost all of it is coming via links from other platforms.

The what's new mentality has entirely shifted to the platforms.

The next problem is that this also implies that our world is defined by text, but it's not. Imagine you are out running, you ask Siri "what's new", and it starts reading your Facebook NewsFeed aloud.

That doesn't work.

Facebook, Snapchat and others are visual media platforms. In fact, Facebook predicts that, in the future, 90% of its NewsFeed will be video. You can't read that. You have to show it.

So, chat UIs aren't really that interesting for the break. It sounds like it's a perfect match, but it isn't. It's far simpler to just look at your newsfeed when you are bored than to ask Siri to tell you about it.

There are edge cases where chat AIs work really well for the break, but generally speaking, this is not really where the future is.

The update and the lookup

What is the future is the behavior linked to the update and the lookup. These are still micro-moments, but they are linked directly to an intent. They are the kind of moments you have when you need something, and chat AIs are perfect for this.

Think of all the moments you have during the day where a question pops into your head, where you need some specific information, or where getting timely updates is critical to you.

We have millions of these micro-moments, and they are all kind of cumbersome to do, but chat AIs fix this.

For instance, take Amazon Alexa where you can now do this:

It's absolutely stunning. I want to be able to do this not just with Amazon, but with any of the three online grocery stores that I use. It's so incredibly useful.

But we are talking about news here, not food, so how would that work? Well, the potential here is simply staggering, but we have a problem.

Take a look at the front page of Washington Post. Every single story is designed either for the break (low-intent snacking) or the recline (low-intent consumption). There is not a single story here that would work with on-demand chat AIs.

This is the problem that we have with the media.

Because we come from the passive world of print, the way we define news is passive as well. So, we have to completely redefine the very concept of news.

For starters, we have to assume people already know the basic form of news, because otherwise how would they know what to ask about? Chat AIs for news isn't about being the bringers of news. It's about being the step after that.

Imagine, for instance, that your friends are discussing immigration. One brilliant way to use a chat AI would be to ask it to look at the data.

For instance, you could ask Wapo's chat AI (if they had one) this:

And it would then respond:

As shown in an overwhelming number of economic research studies and data, there is no clear evidence that immigration has brought forth a decline in native-born employment or labor force participation. One way we can see this is by looking at unemployment rates for both immigrants and non-immigrants. Over the past 15 years, the unemployment rate for immigrants has been identical (around 5%) to that of non-immigrants, so one group isn't taking jobs from the other.

But then you ask:

And Wapo's chat agent would respond:

No, that is a flawed assumption. The reason, according to the U.S. Chamber of Commerce, is two-fold. For one, removing millions of undocumented workers from the economy would also remove millions of entrepreneurs, consumers and taxpayers. The economy would actually lose jobs. Second, native-born workers and immigrant workers tend to possess different skills that often complement one another. So without the immigrants, the US would not be able to compete as effectively as it can today.

Right?

But think about what's happening here. This is not news as stories. This is news as data. But more to the point, this is news being pulled from Wapo based on their reputation... but why would I specifically ask Washington Post about this? In fact, the two responses above were pulled from the Fiscal Times and CNN Money.

You see the problem?

Every newspaper today is based on being the jack of all trades but master of none. They are random packages of news. As such, no newspaper really has the reputation of being the source of anything specific.

This is one of the big reasons why newspapers are failing in the digital world. They have no specific expertise. So, a big part of this future of chat AIs for news is also to think about newspapers as highly specific and specialized niches.

For instance, I wouldn't ask Washington Post about news because I would have no idea if they actually knew the answer. But I would ask Politico about political news, FiveThirtyEight about polling data, or Autosport when I wanted to ask about something related to Formula 1.

You see what's happening here?

In order for a chat AI to become useful, it needs to truly know what it is talking about. And just like how X.ai's Amy knows everything there is to know about managing meetings, the future of news chat AIs needs to be centered on a niche as well. But this is an entirely different form of media that most newspapers are ill-equipped to master.

It's not just that the newspaper's editorial focus prevents them from being specific and in-depth. It's also that the journalists are trained to be the 'view from nowhere'. This model doesn't work when news is on demand and linked to a specific intent.

So, we need to reinvent journalism from scratch. We need to think about news in an entirely different way. We need to rethink the role of news, the people who bring us the news, and the reasons why it exists in the first place.

And more to the point, we need to rethink what news is to begin with. Most news today isn't actually news that people can use, and, as such, it's highly unlikely that they would ever engage with it in a chat-like way. Nor is this future of chat-based news about optimizing for views on Snapchat (which is 'the break' mentality).

There is a lot of potential here, and I have no doubt that more and more of our news consumption will shift to this reality in the future. When you look at Siri, OK Google, Alexa and Cortana, it's pretty clear that this is a concept that is here to stay.

Finally, let's talk about the last two types of behaviors, the story/passion and the recline.

The story/passion and the recline

What sets these behaviors apart from the rest is that they are both firmly based on macro-moments. As in, they are moments where you have chosen to dedicate your time to do something.

This can either be something based on a specific interest or intent, as in content you consume because you really want to know about it (the story/passion), or it can be content that you simply consume because you are relaxing after a long day of work (the recline).

Both of these have a lot of potential in terms of content, but not really with chat AIs. The problem again is that chat AIs aren't really capable of maintaining a conversation over time.

It might be that in the future we will be able to walk down the street while talking to an AI (like in the movie Her). It might also be that, in the future, students will be able to do their homework in direct collaboration with a teaching AI (an idea that I'm personally fascinated by), but we are far from this future.

So, Chat AIs aren't relevant for macro moments.

But what we can do is to extend chat into all of these situations, to remove friction. In the past we called these second screen experiences. For instance, you might be watching a movie and wondering about one of the actors, and ask:

And IMDB's chat AI would respond:

Yes, Jessica Chastain played both Commander Melissa Lewis in The Martian and Murphy 'Murph' Cooper, the adult daughter of the lead character in Interstellar.

Of course, what this means is that we are back to chat being something we use to lookup or act on things.

Chat AIs themselves are far from being relevant as a macro-moment, but they are highly likely to take on a much larger role as secondary AIs that are taking care of business for us. Or as Amazon illustrates in their latest ad:

These are all second screen experiences (without the screen). Your main focus is on doing something else, and we then use chat AIs to reduce much of the friction around us, as well as to make our lives more enjoyable and efficient.

Mind you, we have talked about second screen experiences many times in the past, and they have never really made much of a difference. The main reason was that they were too complicated to use, and they required you to be locked into them. But things are changing.

For instance, some of the early second screen experiences we saw from TV networks were based on people downloading and using a very specific app while watching a TV show. It was a fancy idea, but that's not how second screen experiences work.

Second screen experiences work on impulse. It's not something you plan or open an app to do. And this is also why I created the concept video before. For this to really work, we need a chat AI operating system that isn't defined around a single AI.

But, again, we are talking about news here. So, what would a second screen news AI be like?

Well, I have a really hard time imagining any situation that would work in terms of newspapers, simply because of how random they are. As long as you lack the super-niche specialization that we would need from a chat AI, you simply can't make it work.

But I see a ton of potential for magazines.

I mentioned Autosport before, which is one of the best magazines for Formula 1 coverage and insights. So imagine what people could do with an Amazon Alexa-like UI while they are watching this weekend's race on TV.

You could ask about pit times, have it do calculations to predict changes and final standings, and so many other things. It's like having an extended live stream where you can ask questions on demand.

We see the same thing with B2B publications. Imagine you are reading a story about a successful brand campaign, but you wonder about their financials. So you ask:

And the chat AI returns a graph illustrating both over the past five years.

There are so many opportunities here. But what's special about these macro moments is that the chat AI isn't about the moment itself; it functions as the extra layer of 'intelligence' that we could get from media companies within each of their specialties.

As you can see, this isn't the old world of news and media. This isn't about writing articles or being journalists. This is about data, understanding that data, and being able to use an AI to fit each moment.

However, I predict that the next five years will be remembered for all their failures. Apple, Microsoft, Amazon and Google are all making really important progress here, and so are many startups within specific fields. But the media isn't.

Everything we have seen so far from the media has been 'pretend AIs', as in chat AIs that look like they are intelligent, but are actually just a redesigned format. And while these might get a boost in traffic in the short term due to their nature of being a gimmick, they are not really part of the future at all.

The problem we have here is the same as what I wrote about VR/360. We need to take things to a completely new level before it actually starts to work in the long term.

QZ's app looks fancy, but it doesn't really have a future because it is based on the model of print publishing (random news). Other tools, like Techcrunch's Chat AI, are slightly better... but are still just search engines for pushing out links via a chat interface. They aren't AIs at all.

The idea that many publishers have, that everyone will be reading articles via chat in the future, is equally foolish. It doesn't change any of the reasons why old media is in trouble today. This future isn't about the format. It's not print versus digital versus chat.

And as I also said, this isn't about chat either. Chat is an obvious interface for this because it's so simple, but the real future is really about defining and using the content as data.

What if you started to assume that people already knew basic news (which is the reality today with social media)? What if your newspaper was defined as this giant and constantly evolving database? And... what if the only interface into this was an AI assistant that could look at this data from any direction you needed?

How would you interact with that? How would you prepare for that? How would you define news? How would you tell a story with that? And for what moments would this be useful?

This is the real future of chat AIs for the media.



Thomas Baekdal

Founder of Baekdal, author, writer, strategic consultant, and new media advocate.

