A Robot Did Not Write This
A machine would tell you the title above is wrong. Editors will tell you how the machine is right — yet wrong.
Of course, I’m going to be biased. As a writer and editor, words are my bread and butter. I slice sentences, I spread social consciousness through scripts. Yet recently, I’ve been told: “Your job? Algorithms can do that. They just research all the relevant information, put the text together and voila! There’s no reason AI won’t be able to write an article with fewer errors and faster than you do.”
That sounds kind of amazing, simply because there are some days I’d rather binge-watch Black Mirror and not smash my keyboard trying to find synonyms for “delicious” or “picture-perfect”, or get a message from a certain editor-in-chief asking if I had meant to say “intensive” instead of “incentive”. Oh, what a blender — excuse me, blunder.
With a machine, however, all users need to do is “feed” it clear, structured data that can be parsed into “variables”. You can create syntactic templates on any topic, for example “[Restaurant] earned [#] Michelin [star/stars] last night”, and robot reporters can fill in the blanks with the appropriate information for each winner. It becomes a game of AI-enabled Mad Libs.
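In code, that slot-filling idea is almost trivially simple. Here is a minimal sketch (the field names, the helper, and the example restaurant are all hypothetical, purely for illustration):

```python
def pluralise(count: int, singular: str, plural: str) -> str:
    """Pick the [star/stars] form based on the count."""
    return singular if count == 1 else plural

def fill_template(template: str, record: dict) -> str:
    """Fill the bracketed slots in a template from a structured data record."""
    return template.format(**record)

# A hypothetical record parsed from a structured awards feed.
record = {"restaurant": "Odette", "count": 3}
record["unit"] = pluralise(record["count"], "star", "stars")

template = "{restaurant} earned {count} Michelin {unit} last night."
print(fill_template(template, record))
# prints: Odette earned 3 Michelin stars last night.
```

Real platforms like Wordsmith are far more sophisticated, but the principle is the same: the prose is fixed in advance, and only the data changes.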
As early as 2014, multinational news agencies like the Associated Press (AP) started generating such automated content. The news giant is known to produce thousands of AI-written stories on corporate financial results using the Wordsmith platform by Automated Insights. As Philana Patterson, then-assistant business editor at AP, said, automating the quarterly earnings reports freed up about 20 percent of editors’ time for other tasks. And surprisingly, nobody lost their jobs. Journalists who used to write those dry earnings articles gained opportunities to pursue critical analyses, unique angles and deeper interviews.
Beyond statistical accuracy, one of the biggest selling points for automated reporting is speed. “To be very frank, I’m quite open to this idea. At the rate that a lot of digital publications are publishing these days, I completely understand the need to be the first and the fastest,” says Julian Wong, managing editor of Asia’s Rice Media.
At peak performance, AI can create an estimated 2,000 articles per second. Moreover, in contrast to traditional media, which seeks to publish a limited number of high-calibre stories with broad appeal, automated reporting aims to publish many stories, each tailored to a specific target audience. In a press release, Automated Insights emphasised: “Instead of writing one story and hoping a million people read it, Wordsmith can create a million stories targeted at individual users and their preferences. It’s a story that is totally unique to each user because it is powered by their data.”
However, like I said, I’m biased. And it’s dangerous to believe that technology is spared from being biased as well. Darren Ho, editor-in-chief of horology magazine, Revolution, explains: “AI is powered by algorithms and as we’ve seen, coding and algorithm design biases exist as much as personal or unconscious bias. Algorithm biases are to some extent even more dangerous and fuel personal biases even further. Any number of good examples exist, from Facebook and Instagram’s algorithmically-augmented feeds to search engines. These are constructed to feed your interest in a particular subject, but nobody is perpetually focused on a singular subject. The result? We have a more divided public in a closed feedback loop, thinking everyone shares the same opinions as they do. That is dangerous.
“Tech companies need to end the practice of non-chronological feeds on social media and encourage a broader spread of opinions. We encourage diversity in workplaces, why should we not be encouraging diversity in opinions and thoughts?”
This is where Wong agrees. “Where this gets interesting for Rice Media is that we aren’t exactly a news platform — we focus mainly on opinion-led stories and commentary, a lot of which thrive on the subjectivity of the human experience. As of now, this isn’t something that we’ve seen AI-generated articles do. They’re good at churning out reports that are, to some extent, mostly formulaic. But will an AI be able to write a nuanced profile of a celebrity? Will it be able to comment on the complexities of racial tensions in a specific Southeast Asian country? I’m not too sure about that. And given that this is where our focus as a publication lies, it makes little sense for us to commission AI-generated articles, given what they can presently do.”
A more refined and advanced AI may still be emerging, but in a recent twist, a new AI generator made headlines because its creators say it may be too dangerous to release. The Elon Musk-backed non-profit OpenAI held back its new AI model, called GPT-2, for fear of misuse. Essentially a text generator trained on the Internet, GPT-2 can all too easily produce fake news, conspiracy theories and so on.
Therefore, Ho believes the problem facing newsmakers and publishers today isn’t just about AI or technology, but also about opinion news being treated as factual information. “Facts and numbers manipulation and people cherry-picking the information out there to interpret as valid data for their opinions is the issue we need to tackle.”
For someone who draws a clear line between fact and opinion, Jaime Ee, lifestyle editor of The Business Times, feels unequipped to comment. The veteran journalist known for in-depth opinion pieces mostly on Singapore’s food scene has only this to say: “I reckon AI for fast-moving news snippets that need no analysis or background is more appropriate.” Simply put, a computer wouldn’t understand the joys or disappointment of eating — no matter how much you “feed” it. To fathom pleasure, you’ve got to have an actual appetite.