Researchers seek to influence peer review with hidden AI prompts
A recent investigation by Nikkei Asia has uncovered a notable practice in academic publishing: researchers embedding hidden AI prompts in their papers to sway peer reviews in their favor. The investigation identified 17 English-language preprint papers on arXiv that used this tactic. The papers contained prompts, concealed in white text or minuscule fonts, instructing AI reviewers to leave favorable reviews, praise the paper's methodological rigor, and highlight its contributions and innovations.

The authors come from 14 institutions across eight countries, including Waseda University in Japan, South Korea's KAIST, Columbia University, and the University of Washington. Most of the papers are in computer science, suggesting the practice may be concentrated among researchers most familiar with how AI tools behave. When confronted, a professor from Waseda defended the prompts as a countermeasure: conferences ban the use of AI in reviewing, yet "lazy reviewers" may rely on AI tools anyway, and the hidden instructions target exactly those cases.

The episode highlights the evolving intersection of AI and academic research, raising questions about the integrity of peer review and the risk of overreliance on AI for analytical tasks. As these practices draw scrutiny, the academic community faces a pivotal moment to reassess the ethical frameworks governing AI in scholarly evaluation.
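Because the hiding techniques described by Nikkei Asia (white text and tiny fonts) leave detectable traces in a PDF, a reviewer or venue could in principle screen submissions automatically. The following is a minimal, hypothetical sketch of such a check, assuming the PyMuPDF library; the color and font-size thresholds are illustrative assumptions, not a description of how any conference actually screens papers.

```python
# Hypothetical sketch: flag text spans in a PDF rendered in white or at a
# very small font size, the two hiding techniques described above.
# Assumes the PyMuPDF library (imported as "fitz"); thresholds are arbitrary.
import sys
import fitz  # PyMuPDF

WHITE = 0xFFFFFF        # PyMuPDF reports span colors as packed sRGB integers
MIN_VISIBLE_PT = 4.0    # treat anything smaller as effectively invisible

def find_hidden_spans(pdf_path):
    suspicious = []
    with fitz.open(pdf_path) as doc:
        for page_number, page in enumerate(doc, start=1):
            # "dict" extraction groups text into blocks -> lines -> spans,
            # with each span carrying its font size and fill color.
            for block in page.get_text("dict")["blocks"]:
                for line in block.get("lines", []):
                    for span in line["spans"]:
                        text = span["text"].strip()
                        if not text:
                            continue
                        if span["color"] == WHITE or span["size"] < MIN_VISIBLE_PT:
                            suspicious.append((page_number, span["size"], text))
    return suspicious

if __name__ == "__main__":
    for page, size, text in find_hidden_spans(sys.argv[1]):
        print(f"page {page} (font {size:.1f}pt): {text[:80]}")
```

A real screening tool would need more care (colored backgrounds, off-page text, zero-opacity glyphs), but even this simple pass would surface the white-text prompts the report describes.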
