‘Alexa – what are the headlines this morning?’
Artificial Intelligence is penetrating all corners of life.
We’ve heard the Echo Dots; we’ve met Sophia; we’ve asked Siri.
But away from the everyday AI that’s rife in society, industries are starting to use the technology to their advantage, and the newsroom is no different.
Organisations like the Washington Post, Reuters and the Press Association are all using their own forms of AI to improve the processes and systems that journalists are used to.
At Forbes, an AI system called ‘Bertie’ has been programmed to give journalists first drafts and templates for stories.
By having an initial structure for their stories, journalists at Forbes now have more time for other aspects of their jobs: interviews, multimedia content and deeper research, resulting in an improvement in both the quality and the quantity of their journalism.
But surely a robot journalist that can do the job of a human journalist for a fraction of the price is going to cause major job losses in an already cut-throat industry?
Fear not, fellow journalists. AI systems – particularly the one at Forbes – have been put in place not to replace journalists, but rather to help them.
Przemyslaw Jarzynski (PJ) is an Innovation Director with a Master’s degree in electrical engineering. An expert in AI, PJ forecasts that journalism and AI have a strong future together, provided journalists embrace the technological advancement and see AI as a tool for assistance rather than a means of competition:
“AI has the potential to change journalism in many ways. It will be a long process and not everybody will embrace AI in journalism, but it is inevitable and AI will have an important role to play in journalism in the coming years and decades.
“AI can provide journalists with information like what is trending in news at the moment, what stories people like to read, even what style of writing or article length is preferred. There are already tools for journalists that do these things but with AI it will be possible to do it on a larger scale, faster and with more accuracy.”
But as with any new technology system, AI comes with its issues, both practically and ethically.
Who do we hold to account when AI technology produces a story with significant ethical problems, such as inaccurate reporting?
You can’t sue a robot, but is it the fault of the journalist if an AI-generated story is wrong?
“The answer to this question is in my opinion quite simple,” PJ told me.
“AI is just another tool that journalists and publishers are using. So it is the responsibility of the publisher to make sure AI produces accurate news. AI systems are created by humans and they are trained by humans. They use algorithms built by humans… it is possible to define the rules and limits for AI.”
As a general journalistic rule of thumb, we already use news sources that we deem reliable and accurate. There is nothing to stop us from programming robots to do the same thing.
By programming robots to use global news sources such as Reuters and the Press Association, we can keep accuracy issues to a minimum, meaning journalists can rely on AI to find and process the bigger stories whilst focusing on curating new, exclusive content themselves.
AI can process huge amounts of data that humans can’t even comprehend.
Whilst this can be a positive thing for generating an abundance of news stories, it also means that fake news can easily slip into the AI’s data-learning processes, contributing to the ever-growing black hole of fake news that journalists are competing against.
Unfortunately, this doesn’t look like an issue that AI can overcome:
“Although attempts have been made to battle these fake news spreading systems, there are currently no ways, whether using AI or other means, to successfully battle them. This is because the task of separating fake news from real news is much more complex than the task of spreading fake news,” PJ told me.
AI in newsrooms is often programmed to replicate stories it has processed elsewhere based on its learned algorithms. When fake news gets caught up in this process, the AI cannot make the conscious decision to separate true from fake, making the fake news issue even more complicated.
Where is AI’s true place in journalism?
The general consensus is that AI is meant as a tool within the journalism industry to allow journalists to focus on generating original, relevant content rather than churning out stories that have come from other organisations.
But as well as just freeing up time, AI can help to reach wider audiences by generating more relevant content, as PJ tells me:
“What we should start to see more and more is the automation of newsrooms around news writing, news distribution and personalisation. When the journalist finishes the story and presses the publish button, the news will not actually be published but will be placed in a queue, and AI will decide when the best time is to publish this particular piece of content to reach the widest audience.
“There are many more opportunities with technology and AI, and this is where journalists can really add value to the news production process; by being creative. And you can only be creative if you have time to think. AI can free up this time for journalists by automating repetitive, mundane tasks.”
To properly make use of what AI has to offer, journalists need to learn to embrace it, following in the footsteps of Forbes and the Washington Post.
An LSE study conducted in November found that fewer than 40% of news organisations surveyed had a dedicated AI plan, meaning that the industry is desperately dragging its heels.
With AI growing in popularity across the globe, the time for journalists to implement the technology is now if they want to maintain, grow and understand their audiences properly – a final sentiment echoed by PJ:
“Online news publishers need to adapt faster and innovate better, rather than always playing catch-up, in order to maintain their position and profitability. Maybe AI can be the game-changer that this industry desperately needs.”