
Expanding human capabilities: Artificial Intelligence (AI) and Springer Nature

The Source
By: Guest contributor, Sun Jun 9 2024

As a tool, AI, including (but not limited to) large language models (LLMs) and generative pre-trained transformers (GPTs), has the power to illuminate. Springer Nature sees its job as ensuring the responsible and ethical use of these tools in the interests of scientific discovery and research.

This philosophy has helped guide Springer Nature’s approach to developing, deploying, and using AI.

Sustainable Business Report 2023 © Springer Nature 2023

In the 2024 Sustainable Business Report, you can hear more about this approach from Dinah Spence, Chief Risk and Compliance Officer, and Thomas Sütterlin, Vice President of AI.

Read the entire editorial below, and explore the material that follows it, in Springer Nature’s Sustainable Business Report, available now.

AI Research: A Question of Trust

The progress made in AI, and especially the use of generative AI and large language models, has been one of the biggest news stories of 2023. Although we aren’t new to AI and machine learning, we’ve been considering the new opportunities but also the associated risks for our business. This includes establishing new governance to ensure that, as custodians of the scientific record, we make use of AI responsibly and ethically. Here, our Chief Risk and Compliance Officer, Dinah Spence, and Vice President of AI, Thomas Sütterlin, discuss our approach.

Thomas: When I joined Springer Nature seven years ago, we were very much at the beginning of our AI journey. Since then, we’ve piloted numerous exciting developments, such as automatic translations that enable scientists to share their research more widely, opening the door to a global audience for scientific content. We have created machine-generated books and auto-generated literature reviews, and developed Nature Research Intelligence. This uses Nature’s 150 years of research expertise and AI to bring information and data to decision-makers, helping them make progress, faster. But we’re conscious that, with things moving so quickly, there are concerns about whether companies are allowing AI to do too much, or whether it might generate false information or be used by those who want to undermine research with fake content.

Dinah: There are some really exciting opportunities with AI, such as how we are harnessing technologies to improve the service we can provide to researchers. But when deciding what to develop, we have a lot more to consider. We are guided by ethics, a rapidly changing regulatory environment and the need to pay attention to both the evolving competition and the potential risks associated with new technologies.  

Thomas: That’s true. The integrity of the scientific record is of paramount importance to our community and to us at Springer Nature. Threats to research integrity have always existed – for example, through plagiarism. But with generative AI, fabricated content can be harder to detect. You have to take a much more thorough look at the manuscript and the science behind it to see if something has been made up. That may include the people in the process: fake editors and reviewers, for example. We’re now using different AI services to detect this sort of integrity issue. AI is also helping us identify appropriate peer reviewers for research more quickly, which can expedite the process of scientific discovery and publication.

Dinah: Fabrication and deception are clearly issues we need to explore but, as with any other digital tool, there are other risks we need to consider too. While regulatory approaches continue to evolve, we can already see and implement some common principles, including accountability, equity, data privacy, risk management and transparency. Within our AI governance structures, we have created an Ethics Forum and a Legal and Policy Forum to align on these topics across the company. Fortunately, at Springer Nature, we can call on great minds – experts across all scientific fields as well as editors, publishers, compliance and risk teams – working to make sure we’re taking the right approach to developing and adopting new technologies. Ultimately, humans need to be in the loop – and the driving seat – when it comes to the progression and application of AI.

As we navigate the AI landscape, we do so with caution, continuously evaluating its benefits and addressing any concerns raised by the research community. Security and ethics are even more essential to us in the era of AI. 

Thomas: And for those great minds across Springer Nature, we’re at an exciting moment in our AI journey. From a strategy point of view, we’re looking at five key areas: personalised recommendations, ensuring research integrity, easing discovery, enhancing peer review and one-click content conversion such as immediate summaries and translations. There’s so much we can do to benefit researchers and customers. We have opportunities to speed up publishing timelines so that scientific results are released sooner, and AI can truly help us accelerate solutions to contribute to global sustainable development – a key part of our mission.

About the authors
Dinah Spence © Springer Nature 2024

Dinah Spence – Chief Risk and Compliance Officer

Dinah Spence is a solicitor and holds an MSc in Corporate Governance and Ethics. Since moving in-house at Sotheby's in 2004, she has specialised in corporate compliance. She currently leads the Governance, Risk and Compliance team at Springer Nature.


Thomas Sütterlin – Vice President of AI 

Thomas Sütterlin © Springer Nature 2024

As the Vice President of AI at Springer Nature, Thomas leads the AI team and oversees AI Governance and AI Innovation Management across the publishing group. His primary contribution to innovation is the AI Innovation Framework he has established together with colleagues on Springer Nature’s AI Board. This framework ensures that innovation aligns with ethical and legal guidelines, avoids duplication and, most importantly, creates value for users and customers.


Author: Guest contributor

Guest Contributors include Springer Nature staff and authors, industry experts, society partners, and many others. If you are interested in being a Guest Contributor, please contact us via email: thesource@springernature.com.