Recommendations

No-Code Transformation of the NCBI Disease Corpus into a Structured CSV

Working with biomedical corpora often requires programming skills, specialised formats, and time-consuming preprocessing. But what if you could transform a complex annotated dataset—like the NCBI Disease Corpus—into a structured, analysis-ready CSV using nothing more than a single, well-designed prompt? In this post, we demonstrate how a no-code, GenAI-powered…
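The excerpt does not reproduce the post's actual prompt, but a prompt along the following lines illustrates the approach. The PubTator-style input description and the CSV column names below are illustrative assumptions, not the post's exact wording:

```text
You are given documents from the NCBI Disease Corpus in PubTator format:
a title line (PMID|t|...), an abstract line (PMID|a|...), and one
tab-separated annotation line per disease mention
(PMID, start offset, end offset, mention text, mention type, concept ID).

Convert every annotation line into one row of a CSV file with the header
pmid,start,end,mention,type,concept_id. Quote any value that contains a
comma. Output only the CSV, with no commentary.
```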

LLM Parameters Explained: A Practical, Research-Oriented Guide with Examples

Large language models (LLMs) rely on a set of parameters that directly influence how text is generated — affecting randomness, repetition, length, and coherence. Understanding these parameters is essential when working with LLMs in research, application development, or evaluation settings. While chat-based interfaces such as ChatGPT, Copilot, or Gemini typically…
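To make the mapping concrete, here is a minimal sketch assuming the OpenAI Python SDK; the model name and parameter values are arbitrary examples, not recommendations:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": "Summarise RAG in one sentence."}],
    temperature=0.7,        # randomness: lower is more deterministic
    top_p=0.9,              # nucleus sampling: draw from top 90% probability mass
    max_tokens=150,         # hard cap on the length of the generated output
    frequency_penalty=0.5,  # penalise frequently repeated tokens (less repetition)
    presence_penalty=0.0,   # penalise any token reuse at all (pushes to new topics)
)
print(response.choices[0].message.content)
```

The same knobs appear under similar names in most provider APIs, which is what makes them worth learning once.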

Generative AI in Academic Publishing: Tools and Strategies from Elsevier, Springer Nature and Wiley

As generative AI technologies continue to transform the research landscape, major academic publishers are beginning to integrate AI-powered tools directly into their platforms. These tools aim to support researchers at various stages of the scientific workflow – from literature discovery and summarisation to writing assistance and experimental comparison. This article provides…

Comparing the FutureHouse Platform’s Falcon Agent and OpenAI’s o3 for Literature Search on Machine Coding for the Comparative Agendas Project

Having previously explored the FutureHouse Platform’s agents in tasks such as identifying tailor-made laws and generating a literature review on legislative backsliding, we now directly compare its Falcon agent and OpenAI’s o3. Our aim was to assess their performance on a focused literature search task: compiling a ranked…

Building a Retrieval-Augmented Generation (RAG) System for Domain-Specific Document Querying

In recent years, Retrieval-Augmented Generation (RAG) has emerged as a powerful method for enhancing large language models with structured access to external document collections. By combining dense semantic search with contextual text generation, RAG systems have proven particularly useful for tasks such as answering questions based on extensive documentation, enabling…
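The core retrieve-then-generate loop is compact enough to sketch. The snippet below is an illustration rather than the system described in the post: it assumes sentence-transformers for dense retrieval and an OpenAI-compatible chat API for generation, and the model names and the `answer` helper are placeholders:

```python
from openai import OpenAI
from sentence_transformers import SentenceTransformer, util

docs = ["chunk 1 of your domain documents...", "chunk 2...", "chunk 3..."]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # small example encoder
doc_embeddings = encoder.encode(docs, convert_to_tensor=True)
client = OpenAI()  # assumes OPENAI_API_KEY is set

def answer(question: str, k: int = 3) -> str:
    """Retrieve the k most similar chunks, then generate an answer from them."""
    q_embedding = encoder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_embedding, doc_embeddings, top_k=k)[0]
    context = "\n\n".join(docs[hit["corpus_id"]] for hit in hits)
    prompt = (
        f"Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(answer("What does the documentation say about access control?"))
```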

Introducing Horizon Navigator '25: A Custom GPT by poltextLAB for Smarter Access to EU Funding Information

Navigating Horizon Europe’s 2025 Work Programme means dealing with twelve separate documents, each several hundred pages long. These include funding calls, eligibility conditions, strategic priorities, and legal annexes—making it difficult to locate critical information quickly. To address this challenge, poltextLAB developed a domain-specific Custom GPT (Horizon Navigator '25)…

Solving Health Insurance Demand and Social Loss Models Using Manus AI

Can a large language model accurately solve a university-level microeconomics exercise on health insurance? We tested Manus AI on a multi-part problem involving demand curves, list prices, out-of-pocket prices, and the calculation of social loss under various insurance schemes – including full insurance, coinsurance, and copayment plans. Not only did the…
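For orientation, the standard textbook expression for the social (deadweight) loss from insurance-induced overconsumption is the triangle below. This is a generic sketch assuming linear demand and a constant marginal cost equal to the list price \(p_L\), with insurance lowering the out-of-pocket price to \(p_O\); the exercise given to Manus AI may differ in its exact setup:

```latex
% Deadweight-loss triangle under linear demand Q(p),
% assuming marginal cost = list price p_L and an insured
% out-of-pocket price p_O (full insurance: p_O = 0).
\[
  \mathrm{DWL} = \tfrac{1}{2}\,(p_L - p_O)\,\bigl(Q(p_O) - Q(p_L)\bigr)
\]
```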

Introducing poltextLAB QuantiCheck: A Custom GPT for Evaluating Quantitative Research Rigour

In our earlier posts, we explored how useful existing Custom GPTs are for academic tasks and explained how to create your own GPT from scratch. This follow-up post puts those insights into practice by introducing QuantiCheck—a Custom GPT we developed specifically to assess the methodological rigour and reproducibility of…

Thematic Content Analysis of Martin Luther King Jr.'s "I Have a Dream" Speech Using Grok 3

Thematic content analysis is a key method in political text interpretation, but it typically requires human judgement to define categories and trace meaning. In this study, we explored whether Grok 3, a state-of-the-art large language model, can carry out this task autonomously—without predefined themes or external guidance. Using Martin Luther King Jr.'s…

Zero-Shot PRO/CON Classification: DeepSeek Achieved 100% Accuracy in Labelling Claims

Can a language model accurately classify argumentative claims without any prior examples or fine-tuning? We put DeepSeek-V3 to the test on a real-world stance classification task involving 200 claims from a structured dataset. The model was asked to determine, for each claim, whether it supported (PRO) or opposed (CON) a…
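A zero-shot run of this kind takes only a few lines of code. The sketch below is illustrative, not the experiment's actual script: it assumes DeepSeek's OpenAI-compatible endpoint and the "deepseek-chat" model name from DeepSeek's documentation, and the prompt wording is a plausible guess rather than the one we used:

```python
from openai import OpenAI

# DeepSeek exposes an OpenAI-compatible API; per its documentation,
# "deepseek-chat" is the V3 chat model (assumption: details may change).
client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY",
                base_url="https://api.deepseek.com")

def classify_claim(claim: str, topic: str) -> str:
    """Label a claim PRO or CON relative to a topic, with no examples given."""
    prompt = (
        f"Topic: {topic}\n"
        f"Claim: {claim}\n"
        "Does the claim support (PRO) or oppose (CON) the topic? "
        "Answer with exactly one word: PRO or CON."
    )
    response = client.chat.completions.create(
        model="deepseek-chat",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic labels make evaluation reproducible
    )
    return response.choices[0].message.content.strip().upper()

print(classify_claim("Uniforms suppress students' self-expression.",
                     "Schools should require uniforms"))
```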