GPT-4o mini

OpenAI's successor to GPT-3.5 Turbo

GPT-4o mini scores 82% on MMLU and currently outperforms GPT-4 on chat preferences. It is priced at 15¢ per million input tokens and 60¢ per million output tokens, an order of magnitude more affordable than previous frontier models and more than 60% cheaper than GPT-3.5 Turbo.
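At those rates, per-request cost is simple arithmetic. A minimal sketch of the math (the token counts in the example are hypothetical):

```python
# Published GPT-4o mini list prices: $0.15 per 1M input tokens, $0.60 per 1M output tokens.
INPUT_PRICE_PER_M = 0.15
OUTPUT_PRICE_PER_M = 0.60

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at GPT-4o mini's list prices."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 10,000-token prompt producing a 1,000-token reply.
print(f"${request_cost(10_000, 1_000):.6f}")  # → $0.002100
```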

Top comment

Another solid set of updates [for developers] after the GPT-4o release. What's new:

GPT-4o mini enables a broad range of tasks with its low cost and latency, such as applications that chain or parallelize multiple model calls (e.g., calling multiple APIs), pass a large volume of context to the model (e.g., a full code base or conversation history), or interact with customers through fast, real-time text responses (e.g., customer support chatbots).

Today, GPT-4o mini supports text and vision in the API, with support for text, image, video, and audio inputs and outputs coming in the future. The model has a context window of 128K tokens, supports up to 16K output tokens per request, and has knowledge up to October 2023. Thanks to the improved tokenizer shared with GPT-4o, handling non-English text is now even more cost effective.

And also safer: GPT-4o mini in the API is the first model to apply our instruction hierarchy method, which helps to improve the model's ability to resist jailbreaks, prompt injections, and system prompt extractions. This makes the model's responses more reliable and helps make it safer to use in applications at scale.
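The 128K-token window and 16K-token output cap above imply a budgeting check before stuffing a full code base into a prompt. A rough sketch, assuming the common ~4-characters-per-token heuristic for English text (the helper names and the ratio are illustrative, not OpenAI's tokenizer):

```python
# GPT-4o mini limits quoted in the comment above.
CONTEXT_WINDOW = 128_000    # tokens shared by prompt and completion
MAX_OUTPUT_TOKENS = 16_000  # maximum output tokens per request

def estimate_tokens(text: str) -> int:
    """Crude token estimate (~4 characters per token for English text).
    A production application should count with the model's real tokenizer."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, reserved_output: int = MAX_OUTPUT_TOKENS) -> bool:
    """True if the prompt plus the reserved reply budget fits the window."""
    return estimate_tokens(prompt) + reserved_output <= CONTEXT_WINDOW

print(fits_in_context("def add(a, b):\n    return a + b\n"))  # small snippet: True
print(fits_in_context("x" * 1_000_000))  # ~250K estimated tokens: False
```

Reserving the full 16K output budget is conservative; callers expecting short replies could reserve less and fit correspondingly more context.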