How will AI affect the way we do science? That was the question debated at a recent forum of leaders from academia and science communication in Japan. Simon Pleasants, senior editor, and Hiromitsu Urakami, academic engagement director, Japan, report on the discussion at the 2023 Japan Research Advisory Forum.
Ever since the launch of ChatGPT in November 2022, artificial intelligence (AI) — in itself an established area of research — has been stirring up intense debate about how it should be implemented and what effects generative AI in particular will have on society. It tends to generate two strong emotions simultaneously — excitement over its potential to transform many areas of human endeavor, coupled with a deep-seated anxiety about the negative effects it could unleash.
As CEO Frank Vrancken Peeters has said before, Springer Nature has been using AI for over 10 years. Working with our communities and partners, we have been exploring how AI and other emerging technologies can help us accelerate the development of solutions to the world's urgent challenges; unlock the potential of science and research; advance knowledge; and inspire and improve the lives of generations to come. But while the use of technology is not new for us, how to combine that technical knowledge with our commitment to ethics and to the research community is being hotly debated in our sector, and it was a dominant topic at the recent Springer Nature Japan Research Advisory Forum.*
The question the panel wrestled with was: how will AI affect research practice and publishing?
AI to reshape how science is conducted
Magdalena Skipper, editor-in-chief of Nature and chief editorial advisor of Nature Portfolio, noted that AI will reshape how science is done and that it will be impossible to ignore. It is such a disruptive technology that Nature has launched dedicated coverage of how AI is transforming the scientific enterprise and has conducted a survey of 1,600 researchers to better understand their attitudes toward it.
Yuko Harayama, professor emeritus of Tohoku University and co-chair of the Japanese Association for the Advancement of Science, echoed these thoughts and noted that AI raises profound questions about the fundamental nature of research. “What are the essential rules and practices for doing science? What are the core parts of doing research and how will researchers be influenced?” she asked. “We need to revisit these questions in light of AI.” No other technology has caused scientists — and indeed society as a whole — to do so much soul-searching.
AI to accelerate the process of discovery
AI can tackle problems that are intractable for humans alone. Aya Furuta, senior writer at Nikkei and editor and writer at Nikkei Science, shared a comment by a former particle physicist: the traditional way to conduct science is to create models that explain things, but there is now a new way to understand phenomena. The physicist also argued that we have done as much as our brains can understand and that progress in science is slowing, so we may need another way forward.
Amane Koizumi, project professor at the National Institutes of Natural Sciences, shared his view that new discoveries often happen by chance, and that AI can be used to accelerate that process of discovery by serendipity. AI is good at generating lots of ideas; most can be discarded, but 0.1% of them might prove valuable. Although neither AI nor human intelligence can be fully trusted, he believes the opportunity for new innovation lies in that 0.1%.
Some forum members were very optimistic that AI will help us develop new technologies. AI forces us to think about why we do science: to improve our lives, maybe we don't necessarily need to understand how things work.
But does that mean humans will have a minimal role to play? “Can we envision having an AI tool that you feed data into and out pops a paper?” asked Skipper, to stimulate further discussion. “Is that how we will end up doing science?”
Rising concerns over AI
Many other areas of concern about AI were raised during the discussion, including job security, information overload, people thinking less, reproducibility, transparency and the reinforcement of biases. Kei Kano, a professor at Shiga University, raised concerns about inclusiveness, citing his work on an Indigenous technology called "Kakishibu," a fermented persimmon juice with a long history but few written records of how it is made and how it works. How could AI include such Indigenous knowledge?
One specific area where AI could facilitate research is the peer review of papers. Forum members pointed out that when it comes to peer review, humans are incomplete and imperfect: we all have our own biases. AI can provide greater coverage, but there are concerns that how AI arrives at its decisions is inscrutable. Then again, it was pointed out that human peer review is itself completely a black box. Even so, the need to continue to trust humans was raised, since trust is a very human trait.
In summing up the session, Antoine Bocquet, managing director of Springer Nature Japan, noted that there was general agreement that AI would greatly benefit research, but that panel members had also raised a long list of concerns. Certainly, AI promises to usher in a brave new world that will affect all aspects of scientific research and publication.
###
This piece forms part of a series of blogs from the Japan Research Advisory Forum, the first of which covered science communication. A Japanese version of the JRAF discussion is also available.
###
*As one of the world's largest publishers of research and education content, Springer Nature is committed to opening the doors to discovery and ensuring that the research community is provided with the platforms and resources to leave a lasting impact on society. As part of that ongoing commitment, Springer Nature held its inaugural Japan Research Advisory Forum in 2022 and its second forum in 2023. The 2023 forum members (affiliations at the time of the forum) are:
###