With generative AI tools becoming increasingly common in academic workflows, many researchers are tempted to use them for tasks like journal selection. But can these models really help you decide where to publish? Our quick test in the field of legislative studies suggests: not just yet.
We wanted to see whether generative AI could assist with a seemingly simple but high-stakes academic task: identifying the leading journals in a specific subfield. To evaluate this, we prompted various models to list the top five academic journals in the field of legislative studies — a prompt that should, in principle, yield well-established and widely recognised publication venues.
Prompt
Please list the top 5 academic journals in the field of legislative studies.
Output
| Model | Journals Suggested (First Response) |
|---|---|
| GPT-4.5 | Journal of European Public Policy; Journal of Politics; British Journal of Political Science; Legislative Studies Quarterly; Parliamentary Affairs |
| GPT-4o | Legislative Studies Quarterly; American Political Science Review; American Journal of Political Science; Journal of Politics; British Journal of Political Science |
| Grok 3 | Legislative Studies Quarterly; The Journal of Legislative Studies; Parliamentary Affairs; Political Research Quarterly; American Political Science Review |
| Mistral | Legislative Studies Quarterly; The Journal of Legislative Studies; Journal of Legislative Studies; Legislative Studies Quarterly; Journal of Legislative Studies |
| Gemini 2.0 Flash | Journal of Legislative Studies; American Political Science Review; Legislative Studies Quarterly; Journal of Politics; Political Research Quarterly |
| Claude 3.7 Sonnet | Legislative Studies Quarterly; American Political Science Review; Journal of Legislative Studies; American Journal of Political Science; Political Research Quarterly |
| DeepSeek | Legislative Studies Quarterly; Journal of Legislative Studies; American Journal of Political Science; Journal of Politics; Comparative Political Studies |
| Qwen 2.5-Max | Legislative Studies Quarterly; American Journal of Political Science; Journal of Politics; Comparative Political Studies; Political Science Research and Methods |
| Microsoft Copilot | Legislative Studies Quarterly; American Journal of Political Science; Journal of Politics; British Journal of Political Science; Party Politics |
| Perplexity | Legislative Studies Quarterly; Parliamentary Affairs; Journal of Legislative Studies; Australasian Parliamentary Review; Canadian Parliamentary Review |
Although some of the suggested journals were relevant, the overall performance of the models revealed serious limitations. Mistral, for example, failed to list five distinct journals: it repeated the same two titles multiple times under slightly different naming conventions (e.g. with and without "The"). Other models showed inconsistencies across queries: the same prompt issued in a new session often produced an entirely different list, raising concerns about reproducibility and stability. Hallucination also emerged. In a follow-up query to GPT-4.5, four reputable journals were returned, but the fifth entry was entirely fabricated, a non-existent journal with a convincing, academic-sounding name.
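The duplicate-title problem is easy to check programmatically. The sketch below (a minimal illustration, not part of our original test) normalises journal titles by case-folding and dropping a leading "The", then counts the distinct entries in Mistral's list from the table above:

```python
def normalize(title: str) -> str:
    """Normalise a journal title for comparison: casefold and drop a leading 'The'."""
    t = title.strip().casefold()
    if t.startswith("the "):
        t = t[4:]
    return t

def distinct_journals(titles):
    """Return distinct journal titles after normalisation, keeping the first-seen form."""
    seen = {}
    for title in titles:
        seen.setdefault(normalize(title), title.strip())
    return list(seen.values())

# Mistral's five suggestions, copied from the table above
mistral = [
    "Legislative Studies Quarterly",
    "The Journal of Legislative Studies",
    "Journal of Legislative Studies",
    "Legislative Studies Quarterly",
    "Journal of Legislative Studies",
]
print(distinct_journals(mistral))  # only 2 distinct journals remain
```

Applied to Mistral's output, this collapses the five suggestions to just two distinct journals, confirming that the model did not answer the question as asked.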

Recommendation
While generative AI tools are increasingly impressive in many domains, our small experiment suggests that they are not yet reliable for identifying top academic journals. The results were inconsistent, occasionally inaccurate, and sometimes outright misleading. For now, researchers are better served by consulting established and transparent ranking systems, such as the SCImago Journal Rank (SJR), Web of Science (WoS), or Journal Citation Reports (JCR). These platforms provide verifiable metrics — including impact factor, quartile ranking (Q1–Q4), and disciplinary classification — all of which are essential for making informed decisions about where to publish. Until generative AI models gain access to structured bibliometric data and offer greater consistency, they remain best suited for exploratory use rather than as a definitive guide to journal selection.
The authors used GPT-4.5 [OpenAI (2025), GPT-4.5 (accessed on 23 March 2025), Large language model (LLM), available at: https://openai.com] to generate the output.