
Generative AI and Journalism: Content, Journalistic Perceptions, and Audience Experiences

posted on 2025-02-05, 23:57 authored by T.J. Thomson, Ryan Thomas, Michelle Riedlinger, Phoebe Matich

This evidence-based report aims to familiarise readers with a wide array of use cases for AI in journalism, ground them in the legal and ethical issues that journalists and audiences identify with this technology, and reveal news audiences’ expectations about how it should or should not be used. The report ends with a series of questions for journalists and news organisations to consider as they work through their experimentation with, and guidelines around, AI use in journalism.

This report brings together six discrete research and engagement activities conducted over a three-year period (2022-24), drawing on fieldwork in seven countries (Australia, Germany, the USA, the UK, Norway, Switzerland, and France). It focuses on AI in journalism within three broad domains: AI-generated content in journalism, journalists’ perceptions and use of AI in journalism, and news audiences’ perceptions of and reactions to this technology being used in journalism.

Key findings

  • AI bias can take many forms. Although AI exhibits well-known biases against, for example, women and people of colour, our early research also identified lesser-known biases, including environmental biases (favouring urban over non-urban environments when none was specified in a prompt), role and ability biases (showing women less often in more specialised roles and ignoring people living with disabilities), and class biases (over-representing people from seemingly middle-class, white-collar backgrounds). These biases exist because of human biases embedded in training data and/or the conscious or unconscious biases of those who develop AI algorithms and models. Attempts to algorithmically “force” output diversity by default have failed, sometimes dramatically. Stakeholders should also consider the influence that English-language training data have on outputs, as well as the gaps created where historical materials haven’t been digitised, which can likewise skew results. Many AI models are trained by scraping the internet broadly, and biases in these models can be especially prevalent. News outlets sometimes turn to training AI models on their own content in a bid to provide more localised, relevant, and higher-quality results but, given that research continues to find that the news media reinforce the status quo, this shouldn’t be seen as a panacea for algorithmic bias.
  • AI tools and the underlying models that power them are almost always frustratingly opaque. Without transparency into source material and the ways algorithms work, AI tools and AI-generated content pose a challenge for journalism, which has historically prized verifiability, authentication, and, to a certain degree, transparency. AI tools that explain their decisions, disclose their source material, and are transparent in resulting outputs about when and how they are used are less risky for journalists than tools that do not.
  • The news workers we interviewed were more comfortable with using AI for (predominantly non-photorealistic) illustrations than with using AI as a replacement for, or supplement to, camera-based journalism. These participants were also more comfortable with visual AI being deployed in certain parts of the newsroom (features, design, and opinion) than in others (news) and thought that smaller, less-resourced newsrooms would deploy AI in riskier and more ethically challenging ways than larger, better-resourced newsrooms. This isn’t necessarily the case, however, as both large and small news outlets are experimenting with AI in sometimes very public and audience-facing ways.
  • Both journalists and audience members are concerned about the potential of AI-generated or -edited content to mislead or deceive. This concern topped the list of challenges for both groups. Our interviews over the past three years with journalists in newsrooms of varying sizes and in multiple countries also revealed that journalists are, overall, poorly equipped to identify AI-generated or -edited content and that few have systematic processes in place for vetting user-generated or community-contributed visual material. At the same time, few of our interviewees were aware that AI is increasingly, and often invisibly, being integrated into both cameras and image- or video-editing and processing software, so AI is sometimes used without journalists or news outlets even knowing.
  • Journalists and audience members are also concerned about the effects that generative AI will have on human labour, ability, and broader social structures. These fears reflect a long history of technologies reshaping the human labour force in journalism production. Journalists were concerned about the potential for job losses and for AI to be used to justify further cuts (while management was generally more open to experimenting with and adopting AI processes in the newsroom). For their part, audience members were concerned that AI-generated or -edited journalism was inferior to human-produced journalism and that computer-generated or -edited journalism lacked the uniquely human traits of sensitivity, adaptability, humour, and empathy. Audiences were also concerned that fewer journalists might be employed if AI made newsroom processes more efficient and worried about the effect on democracy of having fewer journalists able to hold power to account.
  • At the time of our interviews, only a minority of the outlets whose staff we interviewed had policies in place governing generative AI. Those that did often had internal-only policies or publicly available principles that lacked concrete guidance on day-to-day use. Both journalists and audience members want news outlets to have policies on AI and, where possible, for policies to be standardised across the industry, both for consistency and to uplift audience trust across the news sector. We agree that news organisations should have AI policies in place and encourage newsroom leadership to explore academic research on AI policies or consult AI policy templates when developing their outlet’s own approach.
  • Both news audiences and journalists thought transparency about when and how AI was used was important. Audiences said they wanted context around AI use to be clearly conveyed at the beginning of a piece of journalism (whether in video, audio, or written form). They also wanted a sense of how much of a news item was AI-generated or -edited and for labels to appear in the same place each time. Our participants wanted labels on the content itself rather than adjacent to it, wanted the industry to adopt a universal symbol denoting AI-generated or -edited content, and appreciated an on-demand label that could expand with more context when the audience member desired it.
  • Only a minority of our interviewees were confident they had encountered AI-generated or -edited content in the journalism they consumed. However, half of our participants either suspected they had consumed such content or were unsure. Of those who had encountered AI-generated or -edited content in journalism, the most frequent type was reporting on AI. Participants also reported seeing AI-generated text, AI content masquerading as news, AI-generated images in news, AI-translated journalism, AI-edited journalism, and an AI-generated weather report.
  • Audience members we interviewed were more comfortable with AI tools or processes being used in journalism when they themselves had firsthand experience with such tools. For example, participants had noticed that, when they inserted an image in Microsoft Word or PowerPoint, the software automatically created an alt-text description of the image for vision-impaired users; they therefore felt more comfortable, in general, with journalists using computer vision to identify subject matter in images and add keywords automatically rather than manually. Audience members were also familiar with AI-generated blurred backgrounds in Zoom or equivalent video calls and in the iPhone’s “portrait” mode, so felt more comfortable, in general, with journalists using these same everyday tools.

Funding

Design and Creative Practice, Information in Society, and Social Change Enabling Impact Platforms at RMIT University

Weizenbaum Institute for the Networked Society / German Internet Institute

Center for Advanced Internet Studies

Global Journalism Innovation Lab

QUT Digital Media Research Centre

Australian Research Council through DE230101233 and CE200100005
