
Over the past few months, anxiety about artificial intelligence, particularly large language models such as ChatGPT, has reached a fever pitch. One area where AI's potential impact is most keenly felt is information, news, and journalism. People are concerned about the damage that deepfakes and other AI-generated disinformation could do to democracies and free societies everywhere.

At the heart of these concerns is the notion that machines have no moral or ethical compass, no understanding of our reality, and can be nudged or explicitly directed by bad human actors. The worry is that they cannot be trusted with complex and difficult truths, and will reach for easy answers instead.

Put another way, can we trust algorithms to tell us the true story about our world?

Concerns about technology and artificial intelligence (sometimes broadly referred to as "automation") and their impact on storytelling and journalism have been around since the early 20th century, when editors conjured up the specter of reporter robots. The actual use of technology in these areas dates back to the 1960s and 1970s, when the first generation of word-processing tools appeared. This included very early forays into automated writing aids, such as spell-checking and grammar-checking, as well as the introduction of software-driven processes for copy editing, layout, and production.

Then, in the 1980s, 1990s, and 2000s, AI more or less as we understand it today began to be discussed in journalism industry publications. This era saw the complete computerization of newsrooms and the introduction of the internet into the news and media industries.

When thinking about the legacy of AI anxiety, context is important. Knowing where these fears come from can help today's and tomorrow's news consumers make wiser choices about when to trust the machines and software that will be part of our lives for the foreseeable future.

The arrival of technology anxiety in newsrooms

During the Great Depression, when journalism jobs and newspapers were struggling to survive, Marlen Pew, editor of the newspaper industry trade publication Editor and Publisher, wrote about "all-steel robot reporters equipped with photosensitive recording tape and used by the British Broadcasting Corporation to cover political meetings, sports, and public dinners" as a harbinger of further automation.

Pew quoted E.I. Collins of the Jersey City, New Jersey, Journal, who wrote a little newsroom poem to mark the occasion:

"Yes, the robots have replaced us,
And the newspaper game is over;
We have secured our place on the sidelines,
No longer 'shadows' of fame.
When the president issues a statement,
Or a long shot comes in at the game,
The robots advance the story,
But somehow it's not the same."

That was in 1934, indicating that news workers have been worried about how technology will affect their work for almost a century. In fact, concerns about the introduction of machines that could set type, and later about software that could perform other tasks traditionally done by hand, such as layout, date back at least to the 1920s and 1930s, if not earlier.

In 1952, during the early days of large mainframe computers, Robert Brown, also writing in Editor and Publisher, joked, "It won't be long until we have completely automated newspapers, newspapers that are published automatically without any human intervention." He pointed out that since the 1930s, automation such as teletypesetters (which feed wire stories into newsrooms) and "automatic pneumatic dispensers, automatic dispensers for coffee and soft drinks" had already taken jobs away from newsrooms.

"Most of the venerable workers in the newspaper field have already been eliminated…all we need is one or two more automatic machines," Brown wrote, perhaps only half-jokingly.

Word-processing software became widely used in the 1960s and 1970s with the introduction of computers for basic layout and production tasks. These tasks were conducted via shared computer terminals (often used by four or more reporters) and introduced a new reliance on spell checkers and content management systems. This meant that journalists, and more indirectly their audiences, were getting the news, at least in part, from automated systems, as Juliette De Maeyer, an expert on the early computerization of newsrooms, has pointed out.

By the 1980s, most reporters had their own desktop computers, rather than the "dumb" terminals that were common in the 1970s. They were expected to create and revise their stories using the company's intranet and bulletin boards, early digital filing and story-archiving systems, and the aforementioned CMS tools. Even before the advent of the internet and more sophisticated AI tools, reporters were already worried about their jobs, as a cartoon in Editor and Publisher from September 1986 shows.

Doug Borgstead, "The Fourth Estate," © Editor and Publisher, September 13, 1986. Used with permission.

In the 1990s, the early commercial internet and the use of browsers as the go-to tools for checking facts and finding stories prompted a number of anxious commentaries in journalism industry publications. As search engines seemed to be replacing old-fashioned gumshoe reporting (though they weren't), reporters and editors alike worried about the impact on their readers.

Marilyn Greenwald, a reporter covering the technology industry, pointed out in 2004 that the internet and its attendant automated systems might keep readers from getting verified facts, or, at the very least, that readers might not enjoy as much solid coverage as they did in the pre-internet era. The internet provided "such a readily available source of information" that "it's seductive, because it hides the fact that good reporting almost always requires some sweat," she wrote. "Reporters, editors, and news directors have little time to tackle long enterprise stories that cannot be completed by one person in a day or two. Many student and professional reporters are not encouraged to seek out information."

Concerns about modern technology in newsrooms

Since the 2000s, technology and AI have gradually infiltrated both the way news workers do journalism and the way people consume news.

The automation of social media posting by Facebook and Twitter in the late 2000s, and the use of algorithmic story selection, are common today, but at the time they were less clear-cut, involving a mix of human and machine decisions that was difficult to untangle.

For example, the curation of breaking news from mainstream outlets like NBC News was done by both software and humans. While this is not quite the same as LLMs writing articles or producing other types of narrative "content," scholars such as C.W. Anderson and Arjen van Dalen argued that news organizations were already relying on AI, and predicted that they would become increasingly reliant on it in their production.

In 2014, The Associated Press began automating its financial reporting. In 2016, The Washington Post used early AI tools to create stories about the Olympics and that year's elections. Computers had been instrumental in election coverage since the 1950s, but this time was different. The rest of the 2010s saw similar, fairly quiet experiments, until 2023, and especially this past year, when the issue burst into the public consciousness (or at least that of journalists).

So why does it matter that machine-generated content has a long history, or that we as news consumers know that AI tools have been in use for at least a decade?

As a media historian who studies the long and often troubled evolution of technology in journalism, I believe it is important for people who rely on news, whether generated and curated by humans, by AI, or by some mix of the two, to expect, and even demand, transparency and the ethical use of AI.

For accountability, it is important to know who, or what, is writing about our institutions and our culture. For media literacy, it is also important for readers to know how to read the news and check facts for themselves. Both of these tasks become much more difficult when it is unclear whether a machine, a human, or some combination of the two did the work.

There are some good examples of news organizations responding to these kinds of concerns: The Guardian, for one, has set out internal and external guidance on the use of AI in its newsroom. Other outlets have been less forthright. Sports Illustrated hired a company that used AI to generate stories under fake bylines, and CNET was forced to issue corrections on 41 of the 77 articles it had produced with AI tools after a large number of errors were found.

The use of machines in gathering, editing, and publishing news is nothing new.

But what is new is the degree to which machines themselves are making judgments about newsworthiness. That is why we should be smart news consumers who reward news organizations that use AI ethically, and who, to borrow from the 1930s newsroom poem quoted above, still demand that a [human] reporter cover the story to "give the real punch" to it.
