Published
March 17, 2026

AI in Publishing: Questions from the Publisherspeak Community Answered

Featuring insights from Tim Lloyd, Founder and CEO of LibLynx

At Publisherspeak US 2025, a panel discussed how publishers are dealing with the fast-paced changes that artificial intelligence is bringing to scholarly publishing.

The panel, titled “A Practical Guide to Managing an Evolving AI Landscape: Collaboration, Provenance, and Practice!”, explored questions about responsibility, transparency, and how AI is used in publishing. 

Dr. Chhavi Chauhan, founder and President of Samast AI, led the discussion. The panelists were Tim Lloyd (Founder and CEO, LibLynx), Wendy Queen (Chief Transformation Officer, Johns Hopkins University Press), and Gary Price (Curator/Editor, Library Journal’s infoDOCKET).

Attendees of the Publisherspeak US 2025 conference sent in thought-provoking questions for the panel. In this blog, we are joined by Tim Lloyd, who shares his responses to those questions.

1. “If responsible use of AI in publishing results in fewer accepted papers due to better filtering of low-quality or redundant work, how can publishers reconcile this with business models that rely on content volume and revenue growth? Could less be ok?”

Tim: A fundamental part of the value publishers add to manuscripts is improving their quality, and poor-quality publications actively damage the credibility of the publisher (and of scholarly publishing in general). The days when legitimate publishers were happy to push for volume over quality are gone, and there's a general recognition of the need to refocus on quality. I see an opportunity for publishers to grow their operations by leaning into quality: positioning themselves as trusted editors and curators of research who offer much more than simply an online tool. With the right strategies, AI can support this type of model by improving quality and increasing cost efficiency.

2. “Authors are increasingly required to disclose AI use in manuscript preparation—should publishers be held to similar standards by publicly disclosing what AI tools they use in editorial workflows such as peer review, copyediting, or desk rejections?” 

Tim: It’s clear to me that AI will become business as usual over the medium term, with almost every tool incorporating some element of AI. Trying to document and disclose every use of AI within a publishing workflow will become as meaningless as documenting every time a particular programming language is used. I suspect the same is likely to happen on the author side: authoring tools of all types will increasingly rely on AI to assist with a variety of drafting tasks. I view this as distinct from the more fundamental requirement that authors own their manuscript, regardless of how AI has supported them. So, no to publisher disclosures, and yes to authors taking responsibility for what they submit.

However, in the short term, the publishing community is still establishing norms and best practices in this area, and authors will have questions and concerns. It makes sense for publishers to have transparent policies about their use of AI, and it may be worth being more granular about what this means in practice.

3. “Could AI be used to shift the narrative around retractions—from one of failure to one of progress and correction?”

Tim: One example of where AI can help is in flagging issues that unintentionally cause a retraction, such as incorrect data tables or inconsistent images, and resolving them before publication. I’ve encountered cases where manuscripts were flagged as fraudulent, but a conversation with the author revealed a simple mistake that didn’t alter the fundamental value of the underlying research. Tools like this could greatly help authors who make an error, saving all the time and effort associated with processing a retraction.

4. “Are the AI-related risks and benefits significantly different for HSS compared to STEM fields, and how might that influence infrastructure and policy decisions?”

Tim: I see some material differences. On the risk side, it’s currently easier for AI to unambiguously analyse facts and logical assertions than the conceptual arguments that often underpin HSS research. This may make it easier to reliably apply AI tools to STEM research, at least until AI gets more sophisticated. On the benefit side, the economics of HSS publishing can be brutal, both more expensive (a humanities monograph vs. a science article) and lower volume, so the potential cost efficiencies from using AI could have a greater impact on HSS publishing.

5. “How can the publishing ecosystem best employ AI technologies to improve the accessibility (i.e., WCAG conformance) of content, platforms, and the distribution chain?”

Tim: Accessibility needs to become as much a standard part of the publishing process as formatting XML or citations. AI can make a significant contribution here, both by identifying and automatically fixing formatting issues and by recommending solutions where user input is needed, such as text labels.

Conclusion

These questions from the audience show what publishers are thinking about as AI becomes a bigger part of research and publishing. Areas of concern include quality control, responsibility, accessibility, and how different fields within the publishing industry are affected.

As Tim’s answers show, many of these topics are still changing. Some areas are clearer now, but others will continue to develop as tools improve and the community learns more.

We thank the Publisherspeak community for their thoughtful questions and Tim for sharing his perspectives!

Find out more about the Publisherspeak community here.