Testing OCR on Handwritten PDFs: Comparing Model Accuracy on English, French, and Hungarian Samples

Optical character recognition (OCR) of handwritten text remains a demanding task, particularly once the focus shifts beyond English. In this experiment, we assessed a range of generative AI models on three handwritten text samples – one each in English, French, and Hungarian – to examine cross-linguistic performance. While accuracy was consistently high …

by Rebeka Kiss

Testing Claude Sonnet 4.5 for Academic Slide Design: From Research Papers to Conference Outlines

We tested the new Sonnet 4.5 model on a demanding academic task: transforming a research paper into a structured 15-slide outline for a conference presentation. The prompt required not only conceptual rigour and narrative coherence but also attention to visual communication, with clear guidance on where charts, tables, and …

by Rebeka Kiss

Does the Language of the Prompt Matter? Collecting Hungarian Population Data Using Claude Opus 4.1 and Sonnet 4

When using generative AI for structured data collection, the language of the prompt can make a real difference. In our test with Hungarian population statistics, both Claude Opus 4.1 and Sonnet 4 produced accurate outputs when prompted in Hungarian – but with an English prompt, Sonnet 4 generated rounded figures …

by Rebeka Kiss

Searching for Literature with GenAI: Do the Latest Models Deliver Greater Accuracy?

In our earlier blog post, Harnessing GenAI for Searching Literature: Current Limitations and Practical Considerations, we examined the reliability of generative AI models for scholarly literature searches. To assess whether the newest releases represent any improvement, we tested them on the same narrowly defined academic topic. The results indicate modest …

by Rebeka Kiss