
Exploring Temperature Settings in Creative Writing: A Haiku Generation Case Study with the OpenAI API

Every time you prompt a large language model, you're witnessing the result of thousands of sequential sampling decisions—each token drawn from a probability distribution over the model's vocabulary. In standard chat interfaces, these distributions are sampled with fixed settings that users cannot adjust. But API access exposes those settings directly: temperature in particular controls how strongly the model favors its most likely tokens. In this case study, we generate haiku at several temperature settings to see how that choice shapes creative output.
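As a minimal sketch of what this looks like in practice (assuming the official `openai` Python SDK v1.x, an API key in `OPENAI_API_KEY`, and a placeholder model name), temperature is passed directly on each chat completion request:

```python
# Minimal sketch: comparing temperature settings for haiku generation.
# Assumes the official `openai` Python SDK (v1.x) and an API key in OPENAI_API_KEY;
# the model name below is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()

PROMPT = "Write a haiku about autumn rain."

for temperature in (0.2, 1.0, 1.8):
    response = client.chat.completions.create(
        model="gpt-4o-mini",            # placeholder model name
        messages=[{"role": "user", "content": PROMPT}],
        temperature=temperature,        # low = predictable phrasing, high = more varied token choices
    )
    print(f"--- temperature={temperature} ---")
    print(response.choices[0].message.content)
```

Running the same prompt at several temperatures side by side makes the trade-off easy to see: low values converge on similar imagery, while higher values produce more surprising (and occasionally incoherent) verse.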

Understanding Model Confidence Through Log Probabilities: A Practical OpenAI API Implementation

Log probabilities (logprobs) provide a window into how confident a language model is about its predictions. In this technical implementation, we demonstrate how to access and interpret logprobs via the OpenAI API, using a series of increasingly difficult multiplication tasks. Our experiment reveals that declining confidence scores can effectively signal when the model is approaching the limits of what it can answer reliably.
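A minimal sketch of how logprobs are requested and read back, assuming the official `openai` Python SDK (v1.x) with a placeholder model name and an example arithmetic prompt:

```python
# Minimal sketch: requesting token log probabilities for an arithmetic question.
# Assumes the official `openai` Python SDK (v1.x); the model name and prompt are placeholders.
import math

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "What is 137 * 249? Answer with the number only."}],
    logprobs=True,        # return a log probability for each generated token
    top_logprobs=3,       # also return the 3 most likely alternatives at each position
)

# Each generated token comes back with its log probability; exponentiating
# recovers the probability the model assigned to that token.
for token_info in response.choices[0].logprobs.content:
    prob = math.exp(token_info.logprob)
    print(f"{token_info.token!r}: logprob={token_info.logprob:.3f} (p≈{prob:.3f})")
```

Summing or averaging these per-token values over the answer span gives a single confidence score per problem, which is the quantity that tends to fall as the multiplications get harder.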

Building Intelligent Tool Use with the OpenAI API: A Practical Implementation Guide

Tool use (also called function calling) is one of the most powerful capabilities available in modern language models. It allows models to extend beyond text generation by interacting with external systems, databases, or custom logic—making AI agents capable of real-world tasks like checking weather, querying APIs, or running calculations.
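To make the mechanics concrete, here is a minimal sketch of a function-calling request, assuming the official `openai` Python SDK (v1.x); the `get_weather` tool and the model name are hypothetical placeholders for illustration:

```python
# Minimal sketch of function calling (tool use) with the Chat Completions API.
# Assumes the official `openai` Python SDK (v1.x); `get_weather` and the model
# name are hypothetical placeholders.
import json

from openai import OpenAI

client = OpenAI()

# Describe the tool to the model as a JSON Schema so it can emit structured calls.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "Name of the city."},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "What's the weather in Tokyo right now?"}],
    tools=tools,
)

# When the model decides a tool is needed, it returns a structured call
# (name plus JSON arguments) instead of a plain text reply.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```

The application then executes the named function itself, appends the result to the conversation as a tool message, and asks the model to continue, which is the loop that turns a text generator into an agent that can act on external systems.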