peer review

Testing Academia.edu’s AI Reviewer: Technical Errors and Template-Based Feedback

Academia.edu has recently introduced an AI-based Reviewer tool, positioned as a solution for generating structured feedback on academic manuscripts. While the concept is promising, our evaluation revealed several significant limitations. We encountered recurring technical issues during both file uploads and Google Docs integration, often requiring multiple attempts.

Testing the Limits of AI Peer Review: When Even Ian Goodfellow Gets Rejected by OpenReviewer

High-quality feedback is essential for researchers aiming to improve their work and navigate the peer review process more effectively. Ideally, such feedback would be available before formal submission, allowing authors to identify the strengths and weaknesses of their research early on. This is precisely the promise of OpenReviewer, an automated peer review tool.

Testing a GPT Academic Reviewer for Pre-Submission Manuscript Improvement

With the rapid advancement of GenAI tools, researchers now have access to innovative ways to improve their manuscripts before journal submission. One such method involves using custom GPT-based reviewers to obtain preliminary feedback. To evaluate the practical usefulness of these GPT-driven reviewing tools, we tested the Scientific Paper Reviewer – ChatGPT.