PROMPT REVOLUTION

Hands-On Prompt-Tutorials for Using GenAI in Research and Education

AI Writing Tools for Researchers: Getting Started with Grammarly and DeepL

Academic writing demands clarity, precision, and often the ability to work across multiple languages. In recent years, AI-powered writing tools have become indispensable aids for researchers looking to improve their manuscripts. Two popular options are Grammarly and DeepL, each offering distinct strengths. Grammarly is known for refining English writing (catching…
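For the translation side, DeepL offers an official Python client. Below is a minimal sketch of calling it, assuming `pip install deepl`; the API key and the sample German sentence are placeholders, not content from any of our tests.

```python
# Minimal sketch: translating a sentence with DeepL's official Python client.
# Requires `pip install deepl` and a valid DeepL API key (placeholder below).
import deepl

translator = deepl.Translator("your-deepl-auth-key")  # placeholder key

# "Large language models are changing academic text production."
abstract_de = "Große Sprachmodelle verändern die wissenschaftliche Textproduktion."

result = translator.translate_text(abstract_de, target_lang="EN-US")
print(result.text)
print(result.detected_source_lang)  # DeepL auto-detects the source language
```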

Assessing the FutureHouse Owl Agent’s Ability to Detect Defined Concepts in Academic Research

Following our previous evaluations of the FutureHouse Platform’s research agents, this post turns to Owl, the platform’s tool for precedent and concept detection in academic literature. Owl is intended to help researchers determine whether a given concept has already been defined, thereby streamlining theoretical groundwork and avoiding redundant…

Comparing the FutureHouse Platform’s Falcon Agent and OpenAI’s o3 for Literature Search on Machine Coding for the Comparative Agendas Project

Having previously explored the FutureHouse Platform’s agents in tasks such as identifying tailor-made laws and generating a literature review on legislative backsliding, we now directly compare its Falcon agent and OpenAI’s o3. Our aim was to assess their performance on a focused literature search task: compiling a ranked…

Using Falcon for Writing a Literature Review on the FutureHouse Platform: Useful for Broad Topics, Not for Niche Concepts

The FutureHouse Platform, launched in May 2025, is a domain-specific AI environment designed to support various stages of scientific research. It provides researchers with access to four specialised agents — each tailored to a particular task in the knowledge production pipeline: concise information retrieval (Crow), deep literature synthesis (Falcon), precedent detection (Owl)…
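To make the division of labour concrete, here is a hypothetical sketch of dispatching a query to the agent suited to a given task. The endpoint URL, payload shape, and helper name are illustrative placeholders, not the platform's actual API.

```python
# Hypothetical dispatcher: route a research query to the specialised agent
# matching the task type. Endpoint and payload format are placeholders.
import requests

AGENTS = {
    "quick_answer": "crow",         # concise information retrieval
    "literature_review": "falcon",  # deep literature synthesis
    "precedent_check": "owl",       # has this concept been defined before?
}

def run_agent(task_type: str, query: str) -> dict:
    """Send the query to the agent mapped to this task type."""
    response = requests.post(
        "https://api.example.com/agents/run",  # placeholder endpoint
        json={"agent": AGENTS[task_type], "query": query},
        timeout=300,
    )
    response.raise_for_status()
    return response.json()

# Example (hypothetical):
# run_agent("precedent_check", "Has 'legislative backsliding' been formally defined?")
```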

Human- or AI-Generated Text? What AI Detection Tools Still Can’t Tell Us About the Originality of Written Content

Can we truly distinguish between text produced by artificial intelligence and that written by a human author? As large language models become increasingly sophisticated, the boundary between machine-generated and human-crafted writing is growing ever more elusive. Although a range of detection tools claim to identify AI-generated text with high precision…
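One common family of detectors scores how statistically predictable a text is to a language model: machine-generated prose tends to have lower perplexity. Here is a minimal sketch of that heuristic, assuming `transformers` and `torch` are installed; note that no universal perplexity cut-off separates human from AI text, which is part of why these tools struggle.

```python
# Illustrative detection heuristic: perplexity under GPT-2.
# Lower perplexity = more predictable text, often (but not reliably) AI-generated.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower means more predictable."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

print(f"{perplexity('Large language models generate fluent academic prose.'):.1f}")
```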

Can AI Really Accelerate Scientific Discovery? A First Look at the FutureHouse Platform

As scientific research has grown increasingly data-intensive and fragmented across disciplines, the limitations of traditional research workflows have become more apparent. In response to these structural challenges, FutureHouse — a nonprofit backed by Eric Schmidt — launched a platform in May 2025 featuring four specialised AI agents. Designed to support literature analysis, hypothesis development…

Building a Retrieval-Augmented Generation (RAG) System for Domain-Specific Document Querying

In recent years, Retrieval-Augmented Generation (RAG) has emerged as a powerful method for enhancing large language models with structured access to external document collections. By combining dense semantic search with contextual text generation, RAG systems have proven particularly useful for tasks such as answering questions based on extensive documentation, enabling…
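As a concrete illustration, here is a minimal RAG sketch: dense retrieval with `sentence-transformers`, followed by prompt assembly for whichever LLM handles generation. The documents, model name, and query are toy examples.

```python
# Minimal RAG sketch: embed documents, retrieve by cosine similarity,
# then assemble a grounded prompt for a generation model.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Horizon Europe eligibility requires at least three partner institutions.",
    "Falcon produces multi-source literature syntheses.",
    "Coinsurance means the patient pays a fixed share of the list price.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, normalize_embeddings=True)  # unit-length vectors

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q_vec = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec  # dot product of unit vectors = cosine similarity
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

query = "Who pays what under coinsurance?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # `prompt` would now go to the generation model of your choice
```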

Introducing Horizon Navigator '25: A Custom GPT by poltextLAB for Smarter Access to EU Funding Information

Navigating Horizon Europe’s 2025 Work Programme means dealing with twelve separate documents, each several hundred pages long. These include funding calls, eligibility conditions, strategic priorities, and legal annexes—making it difficult to locate critical information quickly. To address this challenge, poltextLAB developed a domain-specific Custom GPT (Horizon Navigator '25)…

Solving Health Insurance Demand and Social Loss Models Using Manus AI

Can a large language model accurately solve a university-level microeconomics exercise on health insurance? We tested Manus AI on a multi-part problem involving demand curves, list prices, out-of-pocket prices, and the calculation of social loss under various insurance schemes – including full insurance, coinsurance, and copayment plans. Not only did the…
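For readers who want the underlying model, here is a hedged sketch of the standard textbook calculation, with linear demand P(Q) = a − bQ and the list price taken as marginal cost: insurance lowers the out-of-pocket price, induces consumption past the efficient level, and the social loss is the triangle between the demand curve and the cost line. The parameter values are illustrative, not those from the exercise we gave Manus AI.

```python
# Textbook social-loss sketch: linear demand P(Q) = a - b*Q, list price = marginal cost.
# Insurance cuts the out-of-pocket price to coinsurance * list_price, so consumption
# rises past the efficient level; the deadweight triangle is the social loss.

def quantity_demanded(price: float, a: float = 100.0, b: float = 1.0) -> float:
    """Q(P) = (a - P) / b, inverted from P(Q) = a - b*Q."""
    return (a - price) / b

def social_loss(list_price: float, coinsurance: float,
                a: float = 100.0, b: float = 1.0) -> float:
    """Deadweight loss when the patient pays coinsurance * list_price out of pocket."""
    oop_price = coinsurance * list_price
    q_efficient = quantity_demanded(list_price, a, b)
    q_insured = quantity_demanded(oop_price, a, b)
    return 0.5 * (q_insured - q_efficient) * (list_price - oop_price)

print(social_loss(list_price=40.0, coinsurance=0.25))  # partial coverage -> 450.0
print(social_loss(list_price=40.0, coinsurance=0.0))   # full insurance  -> 800.0
print(social_loss(list_price=40.0, coinsurance=1.0))   # no insurance    -> 0.0
```

A flat copayment plan works the same way, with `oop_price` replaced by the fixed fee the patient pays per unit.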

Introducing poltextLAB QuantiCheck: A Custom GPT for Evaluating Quantitative Research Rigour

In our earlier posts, we explored how useful existing Custom GPTs are for academic tasks and explained how to create your own GPT from scratch. This follow-up post puts those insights into practice by introducing QuantiCheck—a Custom GPT we developed specifically to assess the methodological rigour and reproducibility of…