OpenAI, which has contributed greatly to the development of AI chatbots and large language models, announced its newest and most advanced model, GPT-4o, on May 13, 2024. The GPT-4o model offers higher performance, faster responses, and a wider range of uses than its predecessors, addressing the slow output speed that was their biggest drawback.

In this article, we will examine the GPT-4o model and compare it with GPT-4.

If you're ready, let's start!


  • The GPT-4o model was announced by OpenAI on May 13, 2024, and is a multimodal AI model.
  • With GPT-4o, you can quickly process textual, visual, and audio input and generate output.
  • The GPT-4o model is faster and more wallet-friendly than its predecessor, the GPT-4 model.
  • The GPT-4o model outperformed the GPT-4 model in benchmarks such as MMLU and HumanEval.
  • Unlike the GPT-4 model, the GPT-4o model has vision capabilities.
  • The GPT-4o model was trained on online data up to October 2023 and, unlike GPT-4, does not have a web access feature.
  • If you are looking for a multifunctional AI assistant that allows you to experience the GPT-4o model, ZenoChat by TextCortex is designed for you.

What is GPT-4o?

The GPT-4o model is OpenAI's latest and most advanced AI model, built on the GPT-4 Turbo model and announced on May 13, 2024. It improves on its predecessor in areas such as output speed, answer quality, and the range of supported languages. The GPT-4o model can generate higher-quality, grammatically correct, and concise output not only in English but also in non-English languages.

What’s New in GPT-4o?

The biggest difference between the GPT-4o model and its predecessors is that it uses a single neural network instead of separate neural networks to process different types of input data. As a result, unlike its predecessors, the GPT-4o model can detect background noise, multiple speakers, and emotional tone in its inputs and factor them into the output generation process.

How to Access GPT-4o?

If you have a ChatGPT account, you can access the GPT-4o model for free. To ensure that all users can experience the GPT-4o model, OpenAI has made it available to both Free and Plus users. However, if you have a ChatGPT Plus membership, you get up to 5 times higher usage limits for the GPT-4o model.


The customizable and sophisticated way to access the GPT-4o model is to experience it through ZenoChat. ZenoChat is a conversational AI developed by TextCortex that offers its users advanced AI features, templates, and different large language models. With ZenoChat, you can experience both the GPT-4o model and models such as Claude 3 Opus and Sophos-2.

GPT-4o Pricing (API)

You do not need to pay any fee to access and experience the GPT-4o model. ChatGPT offers the GPT-4o model as its default model with limited usage. However, if you want to use the GPT-4o model with 5 times higher limits, you need to purchase a ChatGPT Plus subscription, which costs $20 per month.


If you want to use the GPT-4o model via the API, it costs half as much as OpenAI's previous most advanced model, GPT-4 Turbo (GPT-4T). In addition, the GPT-4o model is 2 times faster than GPT-4T. The GPT-4o API charges $5 per 1 million input tokens and $15 per 1 million output tokens.
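To see what those per-token prices mean in practice, here is a minimal cost-estimation sketch in Python, based on the $5 per 1M input tokens and $15 per 1M output tokens figures above. The helper name `estimate_cost` is illustrative and not part of any official SDK.

```python
# GPT-4o API prices quoted by OpenAI at launch (USD).
INPUT_PRICE_PER_MILLION = 5.00    # per 1M input (prompt) tokens
OUTPUT_PRICE_PER_MILLION = 15.00  # per 1M output (completion) tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single GPT-4o API request."""
    cost = (input_tokens / 1_000_000) * INPUT_PRICE_PER_MILLION
    cost += (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MILLION
    return cost

# Example: a request with 2,000 input tokens and 500 output tokens.
print(round(estimate_cost(2_000, 500), 4))  # 0.0175
```

In other words, a typical chat request of a few thousand tokens costs a fraction of a cent, and output tokens dominate the bill at roughly three times the input price.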

OpenAI’s GPT-4 vs. GPT-4o Comparison

OpenAI's GPT-4o model was announced with exciting features and managed to outperform its predecessor, GPT-4, in most benchmarks. Let's compare the GPT-4 and GPT-4o models and discover their similarities and differences.


According to OpenAI's GPT-4o announcement, the GPT-4o model outperformed both the GPT-4 and GPT-4T models in benchmarks such as MMLU (88.7%), GPQA (53.6%), MATH (76.6%), HumanEval (90.2%), and MGSM (90.5%). For example, the GPT-4o model scores 53.6% on the GPQA benchmark, while its predecessor, the GPT-4 model, scores 35.7%.


In addition, although the GPT-4o model outperformed its predecessor, the GPT-4 model, with a score of 83.4% on the DROP benchmark, it fell behind the GPT-4 Turbo model's score of 86.0%. In other words, the GPT-4T model still performs better than GPT-4o on advanced coding and reasoning tasks.

Multilingual Tasks

Another area where the GPT-4o model outperforms its predecessor, GPT-4, and makes up for its shortcomings is multilingual tasks. OpenAI trained the GPT-4o model to deliver higher performance and more concise output in non-English tasks. The GPT-4o model performs better in both multilingual text and vision tasks, especially in Afrikaans, Chinese, Italian, Javanese, and Portuguese. In other words, the GPT-4o model can process non-English languages in both text and images better than GPT-4.


Vision Capabilities

Since the GPT-4 model does not have any vision capabilities, the GPT-4o model is a better choice for visual tasks. GPT-4o also offers higher vision understanding, processing, and analysis performance than GPT-4T, OpenAI's previous large language model with vision capabilities. Moreover, the GPT-4o model can process visual inputs and generate related output much faster than the GPT-4 Turbo model.


Output Speed and Rate Limits

The biggest problem with the GPT-4 model is its lower output speed compared to the Claude 3 Opus and Gemini Ultra models. The GPT-4o model outperformed both rival models and the GPT-4 model by generating a 488-word answer in 12 seconds. The GPT-4 model needs approximately 1 minute and 10 seconds to generate the same 488-word output, and even the GPT-4 Turbo model, which stands out for its speed, needs 24 seconds.
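The figures above can be converted into a rough words-per-second rate for an at-a-glance comparison. This is a quick sketch using the 488-word timings cited above (the GPT-4 time is approximated as 70 seconds):

```python
# Rough throughput comparison from the 488-word answer timings above.
WORD_COUNT = 488
generation_times_seconds = {
    "GPT-4o": 12,       # cited: 12 seconds
    "GPT-4 Turbo": 24,  # cited: 24 seconds
    "GPT-4": 70,        # cited: ~1 minute 10 seconds
}

for model, seconds in generation_times_seconds.items():
    rate = WORD_COUNT / seconds
    print(f"{model}: {rate:.1f} words/second")
```

By this rough measure, GPT-4o generates text about twice as fast as GPT-4 Turbo and nearly six times as fast as GPT-4.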

The GPT-4o model sets a new benchmark not only in text but also in voice output speed. It can respond to audio input in as little as 232 milliseconds, with an average of 320 milliseconds, which is comparable to the roughly 250 milliseconds a person takes to respond in an English conversation. That makes the GPT-4o model an AI that speaks nearly as fast and as fluently as a human.

Training Data and Web Access

One of the few points where the GPT-4 model is better than GPT-4o is web access. The GPT-4o model currently has a 128K context window and is trained on publicly accessible online data up to October 2023. In other words, the GPT-4o model cannot generate output about current events, which makes it less suitable for marketing, SEO, and research-related tasks that require up-to-date information.

A Better Way to Use GPT-4o: ZenoChat

If you are looking for an AI assistant that can both access the GPT-4o model and combine it with advanced AI features, ZenoChat by TextCortex is designed for you. With its advanced AI features, various large language models (including GPT-4o) and AI templates, ZenoChat aims to reduce the workload of its users in both daily and professional tasks and boost their efficiency. ZenoChat is available as a web application and browser extension. The TextCortex browser extension is integrated with 30,000+ websites and apps to be your pocket support with powerful LLMs under the hood.

How to Use GPT-4o via ZenoChat?

Accessing the GPT-4o model through ZenoChat is a straightforward process: simply create your free TextCortex account, head to the TextCortex web application, click ZenoChat in the left menu, and select GPT-4o as the LLM from the chat settings. The large language models you can use through ZenoChat are:

  • GPT-4o
  • Claude 3 Opus
  • Claude 3 Sonnet
  • Claude 3 Haiku
  • GPT-4
  • Sophos 2
  • Llama 3
  • Mixtral