Miklós Sebők - Rebeka Kiss

Searching for Literature with GenAI: Do the Latest Models Deliver Greater Accuracy?

In our earlier blog post, Harnessing GenAI for Searching Literature: Current Limitations and Practical Considerations, we examined the reliability of generative AI models for scholarly literature search. To assess whether the newest releases represent any improvement, we tested them on the same narrowly defined academic topic. The results indicate modest…

Evolving File Handling in GenAI Models: Stronger Input Support, Persistent Output Limitations

In a previous blog post, we examined the file-handling capabilities of leading GenAI interfaces. That analysis detailed which formats they could process reliably and where they struggled, particularly with structured data and technical file types. Since then, the landscape has shifted. While downloadable file generation still faces notable constraints…

Can OpenAI Agent Support Academic Research? A Practical Comparison with Manus.ai and Perplexity

We tested the new OpenAI Agent to assess its usefulness in academic research tasks, comparing it directly with Manus.ai and Perplexity’s research mode. Our aim was to evaluate how effectively each tool finds relevant scholarly and policy sources, navigates restricted websites (including captchas and Cloudflare protections), and allows…

Enhancing Research Productivity: A Comprehensive Guide to Canvas and Artifacts in GenAI Interfaces

These tools provide dedicated spaces for creating, refining, and managing content alongside AI interactions, supporting tasks from data visualisation to manuscript drafting. This guide, written for researchers, examines their practical applications, how they operate, the distinctions between them, their version control capabilities, memory management, and availability across models. By leveraging these interfaces, researchers…

Testing Academia.edu’s AI Reviewer: Technical Errors and Template-Based Feedback

Academia.edu has recently introduced an AI-based Reviewer tool, positioned as a solution for generating structured feedback on academic manuscripts. While the concept is promising, our evaluation revealed several significant limitations. We encountered recurring technical issues with both file uploads and the Google Docs integration, often requiring multiple attempts…

Leveraging Generative AI to Verify Journal Guideline Compliance: A Practical Guide for Researchers

Ensuring that a manuscript adheres to a journal's unique and often complex formatting guidelines is a familiar, laborious task for every researcher. It represents a final, time-consuming hurdle before submission, where minor errors can lead to delays or even desk rejection. But what if this critical checking process…

Gemini’s ‘Audio Overview’ as a Tool for Open Science: Turning Scientific Papers into Accessible Audio

Can artificial intelligence make academic research more accessible to non-specialist audiences—or even to busy researchers on the go? Gemini’s new ‘Audio Overview’ feature provides a novel way to experience scientific papers: through short, conversational audio summaries. Available even in the free version of Gemini 2.5 Flash, this…

Testing the Limits of AI Peer Review: When Even Ian Goodfellow Gets Rejected by OpenReviewer

High-quality feedback is essential for researchers aiming to improve their work and navigate the peer review process more effectively. Ideally, such feedback would be available before formal submission—allowing authors to identify the strengths and weaknesses of their research early on. This is precisely the promise of OpenReviewer, an automated…