Enhancing Research Productivity: A Comprehensive Guide to Canvas and Artifacts in GenAI Interfaces

These tools provide dedicated spaces for creating, refining, and managing content alongside artificial intelligence interactions, facilitating tasks from data visualisation to manuscript drafting. This guide, tailored for researchers, examines their practical applications, operational mechanisms, distinctions, version control capabilities, memory management, and availability in other models. By leveraging these interfaces, researchers

by Rebeka Kiss

Testing Academia.edu’s AI Reviewer: Technical Errors and Template-Based Feedback

Academia.edu has recently introduced an AI-based Reviewer tool, positioned as a solution for generating structured feedback on academic manuscripts. While the concept is promising, our evaluation revealed several significant limitations. We encountered recurring technical issues during both file uploads and Google Docs integration, often requiring multiple attempts

by Rebeka Kiss

Testing the Limits of AI Peer Review: When Even Ian Goodfellow Gets Rejected by OpenReviewer

High-quality feedback is essential for researchers aiming to improve their work and navigate the peer review process more effectively. Ideally, such feedback would be available before formal submission—allowing authors to identify the strengths and weaknesses of their research early on. This is precisely the promise of OpenReviewer, an automated

by Rebeka Kiss

Slide Generation from Scientific Articles: Putting Manus’s New Slide Generator to the Test

In this post, we examine the performance of Manus’s newly updated slide generation tool when applied to a peer-reviewed scientific article. The developers claim recent improvements focused on enhancing the tool’s ability to support academic communication. To test these capabilities, we selected a published study in political science

by Rebeka Kiss

Transforming Academic References into Structured HTML with Mistral Le Chat

Academic writing increasingly relies on consistent, machine-readable formatting—especially when preparing manuscripts for digital publication, automated parsing, or citation indexing. This post demonstrates how Mistral Le Chat can accurately convert plain-text bibliographic entries into structured HTML, generating both inline (short-form) citations and full bibliographic records with cross-linked anchors. This

by Rebeka Kiss

AI Writing Tools for Researchers: Getting Started with Grammarly and DeepL

Academic writing demands clarity, precision, and often the ability to work across multiple languages. In recent years, AI-powered writing tools have become indispensable aids for researchers looking to improve their manuscripts. Two popular options are Grammarly and DeepL, each offering distinct strengths. Grammarly is known for refining English writing (catching

by Rebeka Kiss

Assessing the FutureHouse Owl Agent’s Ability to Detect Defined Concepts in Academic Research

Following our previous evaluations of the FutureHouse Platform’s research agents, this post turns to Owl, the platform’s tool for precedent and concept detection in academic literature. Owl is intended to help researchers determine whether a given concept has already been defined, thereby streamlining theoretical groundwork and avoiding redundant

by Rebeka Kiss

Comparing the FutureHouse Platform’s Falcon Agent and OpenAI’s o3 for Literature Search on Machine Coding for the Comparative Agendas Project

Having previously explored the FutureHouse Platform’s agents in tasks such as identifying tailor-made laws and generating a literature review on legislative backsliding, we now directly compare its Falcon agent and OpenAI’s o3. Our aim was to assess their performance on a focused literature search task: compiling a ranked

by Rebeka Kiss