CycleResearcher: Improving Automated Research via Automated Review

Yixuan Weng, Minjun Zhu1,2, Guangsheng Bao1,2, Hongbo Zhang1,2, Jindong Wang3
Yue Zhang1†, Linyi Yang1
1Westlake University 2Zhejiang University 3William & Mary
wengsyx@gmail.com, {zhuminjun, yanglinyi, zhangyue}@westlake.edu.cn

Abstract

The automation of scientific discovery has been a long-standing goal within the research community, driven by the potential to accelerate knowledge creation. While significant progress has been made using commercial large language models (LLMs) as research assistants or idea generators, the possibility of automating the entire research process with open-source LLMs remains largely unexplored.

This paper explores the feasibility of using open-source post-trained LLMs as autonomous agents capable of performing the full cycle of automated research and review, from literature review and manuscript preparation to peer review and paper revision. Our iterative preference training framework consists of CycleResearcher, which conducts research tasks, and CycleReviewer, which simulates the peer review process, providing iterative feedback via reinforcement learning. To train these models, we develop two new datasets, Review-5k and Research-14k, reflecting real-world machine learning research and peer review dynamics.

Our results demonstrate that CycleReviewer achieves a 26.89% reduction in mean absolute error (MAE) compared with individual human reviewers when predicting paper scores, indicating that LLMs can surpass expert-level performance in research evaluation. On the research side, papers generated by CycleResearcher reach an average score of 5.36 in simulated peer review, surpassing the 5.24 average of human-written preprints and approaching the 5.69 average of accepted papers. This work represents a significant step toward fully automated scientific inquiry, providing ethical safeguards and advancing AI-driven research capabilities.

Experiments

CycleReviewer

CycleReviewer outperforms both proprietary systems and human reviewers, achieving a 48.77% reduction in Proxy MSE and a 26.89% reduction in Proxy MAE compared to human reviewers, with a decision accuracy of 74.24%.

| Method | Proxy MSE ↓ (Reviewer=n-1) | Proxy MAE ↓ (Reviewer=n-1) | Proxy MSE ↓ (Reviewer=n) | Proxy MAE ↓ (Reviewer=n) | Decision Accuracy ↑ | Macro F1 ↑ |
|---|---|---|---|---|---|---|
| Human Expert (Individual) | 2.34 | 1.16 | - | - | 75.40% | 75.39 |
| GPT-4o-mini | 3.44 | 1.53 | 2.98 | 1.40 | 53.06% | 34.72 |
| GLM-4 | 4.45 | 1.81 | 3.91 | 1.70 | 49.49% | 33.10 |
| DeepSeek-2.5 | 4.62 | 1.83 | 3.72 | 1.64 | 45.11% | 39.98 |
| Gemini-1.5-Pro | 3.02 | 1.34 | 2.56 | 1.23 | 50.98% | 50.75 |
| Claude-3.5-Sonnet | 6.40 | 2.23 | 5.62 | 2.12 | 48.05% | 32.44 |
| GPT-4o | 6.61 | 2.24 | 6.53 | 2.30 | 52.58% | 34.51 |
| CycleReviewer (123B) | 1.43 | 0.92 | 1.25 | 0.87 | 74.24% | 73.99 |
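
As a rough illustration of how the proxy error metrics above could be computed, the sketch below compares a predicted paper score against the average of a human reviewer panel. This is only an assumption about the metric setup (the exact "Reviewer=n-1" / "Reviewer=n" protocol is defined in the paper), and the helper names are illustrative rather than the authors' evaluation code.

```python
# Illustrative proxy-error computation: compare a predicted overall score
# against the average of a panel of human reviewer scores. The hold-out option
# is an assumption about the "Reviewer=n-1" setting, not the authors' code.
from statistics import mean

def proxy_errors(predicted, human_scores, hold_out_last=False):
    """Return (squared error, absolute error) of `predicted` vs. the panel average."""
    panel = human_scores[:-1] if hold_out_last else human_scores
    target = mean(panel)
    err = predicted - target
    return err * err, abs(err)

# Example: a paper scored [6, 5, 4] by humans and 5.5 by the reviewer model.
mse, mae = proxy_errors(5.5, [6, 5, 4])
print(f"squared error = {mse:.2f}, absolute error = {mae:.2f}")
```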

CycleResearcher: Overall Score

CycleResearcher-12B achieves an average score of 5.36 and an acceptance rate of 35.13% in simulated peer reviews, surpassing the preprint level of 5.24 and approaching the accepted paper level of 5.69.

| Paper Type | Source | Avg Min Score ↑ | Avg Max Score ↑ | Avg Score ↑ | Accept Rate |
|---|---|---|---|---|---|
| Conference Accept Papers | Human Expert | 3.91 | 6.98 | 5.69 | 100.00% |
| Preprint Papers | Human Expert | 3.24 | 6.62 | 5.24 | 29.63% |
| AI Scientist | AI | 2.20 | 5.70 | 4.31 | 0.00% |
| CycleResearcher-12B (Ours) | AI | 3.47 | 6.75 | 5.36 | 35.13% |
| CycleResearcher-72B (Ours) | AI | 3.65 | 6.58 | 5.38 | 33.64% |
| CycleResearcher-123B (Ours) | AI | 3.31 | 6.42 | 5.13 | 21.19% |
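
The score columns above aggregate several simulated reviews per paper. Below is a minimal sketch of that aggregation, assuming each paper carries a list of reviewer scores and a binary accept decision; the data layout is illustrative only.

```python
# Illustrative aggregation of per-paper review scores into the table's columns.
from statistics import mean

papers = [
    {"scores": [6, 5, 5], "accepted": True},   # toy data, not real results
    {"scores": [4, 3, 5], "accepted": False},
]

avg_min_score = mean(min(p["scores"]) for p in papers)
avg_max_score = mean(max(p["scores"]) for p in papers)
avg_score     = mean(mean(p["scores"]) for p in papers)
accept_rate   = sum(p["accepted"] for p in papers) / len(papers)

print(f"Avg Min {avg_min_score:.2f} | Avg Max {avg_max_score:.2f} | "
      f"Avg {avg_score:.2f} | Accept Rate {accept_rate:.2%}")
```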

CycleResearcher: Soundness, Presentation, and Contribution Analysis

Across all three dimensions, CycleResearcher-72B shows the strongest overall performance with scores approaching human expert levels, particularly in presentation (2.88) and minimum soundness (1.86).

[Figure: soundness, presentation, and contribution scores for CycleResearcher models compared with human-written papers]

Expert Evaluation

Human experts rate CycleResearcher's papers significantly higher than AI Scientist's across all metrics, with an average overall score of 4.8 versus 3.6, validating the model's superior performance.

| Model | Avg. Overall | Avg. Soundness | Avg. Presentation | Avg. Contribution |
|---|---|---|---|---|
| AI Scientist | 3.6 | 2.2 | 2.6 | 1.8 |
| CycleResearcher | 4.8 | 2.6 | 2.8 | 2.2 |

Getting Started

Install

Our model is compatible with the transformers library and can be loaded directly through transformers' AutoModelForCausalLM (a minimal loading sketch follows the install commands below). Because the trained model interleaves outline and paper sections during generation, we recommend following the tutorial step by step to run it and obtain a complete new paper.

Note that the experimental results in generated papers are entirely fictional: the model does not run experiments. We strongly recommend carrying out the actual scientific work yourself, using the experimental settings proposed by the model as a starting point.
git clone https://github.com/zhu-minjun/Researcher
cd Researcher
pip install .
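
For reference, here is a minimal sketch of loading the model with transformers' AutoModelForCausalLM, as mentioned above. The checkpoint identifier is a placeholder (use the one given in the tutorials), and the bare generate() call is only illustrative; the tutorials describe the full prompting flow for interleaved outline and paper generation.

```python
# Minimal loading sketch using Hugging Face transformers.
# The model ID below is a placeholder -- substitute the released checkpoint
# name from the repository or tutorials.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path-or-hub-id-of-CycleResearcher-checkpoint"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # assumption: a bf16-capable GPU is available
    device_map="auto",
)

# The trained model interleaves outline and paper sections, so prefer the
# prompting flow from Tutorial 1 over a bare generate() call.
inputs = tokenizer("Your literature-review context here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```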

📚 Tutorials and Demos

We have prepared comprehensive tutorials for both CycleResearcher and CycleReviewer to help users better understand and utilize these models. Our tutorials cover everything you need to get started and make the most of our model suite.

Available Tutorials
  • Tutorial 1: Getting Started with CycleResearcher 🚀
  • Tutorial 2: Understanding CycleReviewer 📝
  • Tutorial 3: Advanced Features 🔥 Rejection Sampling with CycleResearcher and CycleReviewer [TODO] (a high-level sketch of the sampling loop appears below)
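
Since Tutorial 3 is still marked TODO, the sketch below only illustrates the general rejection-sampling idea it refers to: generate several candidate papers with the researcher model, score each with the reviewer model, and keep the best one. The helper functions are hypothetical stand-ins, not part of any released API.

```python
# High-level rejection-sampling loop. `generate_paper` and `review_score` are
# hypothetical stand-ins for CycleResearcher / CycleReviewer calls.
from typing import Callable, Tuple

def rejection_sample(generate_paper: Callable[[], str],
                     review_score: Callable[[str], float],
                     num_candidates: int = 4) -> Tuple[str, float]:
    """Draw several candidate papers and keep the one with the highest review score."""
    best_paper, best_score = "", float("-inf")
    for _ in range(num_candidates):
        paper = generate_paper()
        score = review_score(paper)
        if score > best_score:
            best_paper, best_score = paper, score
    return best_paper, best_score
```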

BibTeX

@inproceedings{weng2024mastering,
  title     = {Mastering Symbolic Operations: Augmenting Language Models with Compiled Neural Networks},
  author    = {Yixuan Weng and Minjun Zhu and Fei Xia and Bin Li and Shizhu He and Kang Liu and Jun Zhao},
  booktitle = {The Twelfth International Conference on Learning Representations},
  year      = {2024},
  url       = {https://openreview.net/forum?id=9nsNyN0vox}
}