In Case You Missed It: Degenerative AI

30th July 2024

Welcome to ICYMI – a weekly snapshot of European news stories that have given me pause for thought. ICYMI is a chance for you to go beyond the front-page headlines and find out what other stories may be worthy of your attention.

Like many companies, we’ve been experimenting with how to use generative AI responsibly. One area we have chosen to make off limits across the agency is using AI to create content such as blog posts and byline articles. Just like the media we work with, our bar for quality and originality is extremely high. Even if the quality of AI-generated content improves – and significant improvement is needed – it will never match the quality of original content written by a human.

One particular consequence of relying on AI for content creation has become even clearer in the last few days, with the publication of a research study by a group of top UK universities. It warns of ‘model collapse’ when AI is trained on output from other generative AI models.

The paper describes how ‘indiscriminate use of model-generated content in training causes irreversible defects in the resulting models’. In other words, a feedback loop is created when AI is trained on datasets that have themselves been created by AI. This means – to quote the Independent’s headline on this story – that ‘AI systems could be on the verge of collapsing into nonsense.’
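To give a feel for how that feedback loop plays out, here is a toy sketch in Python – a deliberate simplification, not the study’s actual methodology. Each ‘generation’ of a very simple statistical model is fitted only to data sampled from the previous generation’s model, and the variety in the original, human-made data tends to erode with each step.

```python
# Toy illustration of the feedback loop (a simplification, not the study's method):
# each generation fits a simple Gaussian "model" to samples produced by the
# previous generation's model, so no new human-made data ever enters the loop.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0 is fitted to "real" human-generated data.
data = rng.normal(loc=0.0, scale=1.0, size=25)

for generation in range(40):
    mu, sigma = data.mean(), data.std()
    if generation % 10 == 0:
        print(f"generation {generation:2d}: mean={mu:+.2f}, std={sigma:.2f}")
    # The next generation is trained only on the previous model's output,
    # so the spread of the original data tends to shrink and rarer values are lost.
    data = rng.normal(loc=mu, scale=sigma, size=25)
```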

Potential solutions are offered, such as watermarking AI-generated content, but the ease with which this approach can be bypassed, and the fact that it would require collaboration between all the major generative AI companies, mean it is hardly a silver bullet.

The Stack has also covered this news, highlighting a part of the report which says: “The value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of LLM-generated content, in data crawled from the internet.”

In other words, when it comes to content, we cannot purely rely on AI. It’s all about quality and originality.

As content specialists helping our clients create op-eds and blogs on often very complex subjects, we found this story particularly striking. There is an increasing amount of AI content out there, and there is probably a time and a place for at least some of it. However, many of the journalists we work with are now using AI detection tools, so that they can at least be clear with their readers about where AI has and hasn’t been used. At a time when trust in the online world is low and the threat of misinformation is very real, it’s easy to see the struggle journalists are facing.

For us, working to build and maintain our clients’ reputations, the line is clear for now. We prioritise original, quality content. I can say with confidence that none of our written work will ‘fall into gibberish and nonsense’, and there are certainly no ‘models collapsing’ at Tyto. You won’t find us creating byline articles and blog posts with ChatGPT.

About the author

Zoë Clark is a Senior Partner and Head of Media and Influence at Tyto. She has led PR at RBS and Qlik, and worked with global brands including Barclays, Mastercard and SAS.

Category:
Insights