Updated on May 22, 2024


Last night, Mira Murati, the CTO of OpenAI, gave us a sneak peek at what the company has been up to and introduced GPT-4o. Has OpenAI hit the ball out of the park with this one? Will Google and Anthropic, two of OpenAI's biggest competitors, be left biting the dust? We did a deep-dive analysis based on the 26-minute live stream of the event, and here are our findings.

Before we start the deep dive, we have covered the evolution of Generative AI ever since the launch of ChatGPT, and you can read more about it in some of our earlier posts:

  1. ChatGPT 3.5 vs 4 – Key Differences
  2. GPT-4 Turbo vs. Claude 3 Opus vs. Google Gemini 1.5 Pro
  3. Generative AI – Dawn of A New Era

So far, we have seen the capabilities of LLMs being pushed every day by tech titans, and yesterday, OpenAI decided to push the competition up by a notch.

Introducing GPT-4o

Mira introduced GPT-4o close to 3 minutes into the address, right after she announced that OpenAI has launched a desktop version of ChatGPT.

The “o” in GPT-4o stands for “omni,” and the model is a step by OpenAI towards more natural human-computer interaction.

GPT-4o provides GPT-4 level intelligence, but it is much faster, and it improves on its capabilities across text, vision, and audio.

Mira Murati, CTO of OpenAI

Mira Murati at launch event of GPT-4o
Mira Murati, introducing GPT-4o. Image Courtesy: OpenAI

We are looking at the future of interaction between ourselves and the machines, and we think that GPT-4o is really shifting that paradigm into the future of collaboration, where this interaction becomes much more natural and far far easier.

The team at OpenAI has made GPT-4o multimodal, which means it can take text, voice, and image inputs and process them.

GPT-4o – solving the latency issue, access to the GPT store, and more…

So far, voice interaction with large language models like ChatGPT relied on a pipeline of transcription, intelligence (the language model), and text-to-speech working together to deliver Voice Mode. However, this pipeline introduced noticeable latency in the earlier models.
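The latency of a sequential pipeline like this is simply the sum of its stages, which is why chaining three models felt slow. A toy sketch in Python (the stage timings below are made-up illustrative numbers, not figures from OpenAI):

```python
# Illustrative only: the old Voice Mode chained three models, so their
# latencies added up. Timings here are invented for the example.
PIPELINE_MS = {
    "transcription": 400,    # speech -> text
    "language_model": 3000,  # text -> text (the GPT model itself)
    "text_to_speech": 600,   # text -> speech
}

def pipeline_latency(stages):
    """Total latency of a sequential pipeline is the sum of its stages."""
    return sum(stages.values())

print(pipeline_latency(PIPELINE_MS))  # 4000 ms end to end
```

A single model reasoning natively across voice, text, and vision removes the hand-offs between stages, which is how GPT-4o gets its response time down.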

OpenAI’s GPT-4o eliminates this problem by reasoning across voice, text, and vision. GPT-4o users will now have access to the same level of intelligence that GPT-4 users had.

GPT-4o can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds.

GPT-4o users will also have access to the GPT Store, where over a million users have already built custom GPTs that can do everything from designing professional logos to helping research students write their theses.

Snap of GPT store
GPT Store – Image courtesy: TechCrunch

GPT-4o also uses vision, where users can upload screenshots, images, and photos, and ask ChatGPT specific questions. 

GPT-4o also now supports 50 different languages, meaning 97% of the world's internet users can now use ChatGPT comfortably.

Paid users of ChatGPT will also get up to 5x higher capacity limits than free users.


In the announcement, Murati said that GPT-4o is available for commercial use through OpenAI's API, with speeds up to two times faster than the fastest ChatGPT model previously available, GPT-4 Turbo.

GPT-4o is also 50% cheaper than GPT-4 Turbo: per OpenAI's pricing page, it costs just $5 per million input tokens, compared to $10 per million for GPT-4 Turbo.

GPT-4o Pricing. Image courtesy: OpenAI
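The 50% price cut is easy to quantify. A minimal sketch using only the input-token rates quoted above (the model names are assumed API identifiers, and only input pricing is modelled here):

```python
# Input-token rates (USD per million tokens) from the pricing quoted above.
PRICE_PER_MILLION = {
    "gpt-4o": 5.00,
    "gpt-4-turbo": 10.00,
}

def input_cost(model: str, tokens: int) -> float:
    """Cost in USD for processing `tokens` input tokens on `model`."""
    return PRICE_PER_MILLION[model] * tokens / 1_000_000

# Example: a workload of 20 million input tokens per month.
tokens = 20_000_000
print(input_cost("gpt-4o", tokens))       # 100.0
print(input_cost("gpt-4-turbo", tokens))  # 200.0
```

For any workload, the GPT-4o bill for input tokens works out to exactly half the GPT-4 Turbo bill.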

GPT-4o Going Multimodal

One of the coolest features of GPT-4o is its ability to understand and respond to real-time, conversational speech. To demonstrate these capabilities, Murati asked two OpenAI researchers to come on stage and have a real conversation with the chatbot.

Open AI researchers using GPT-4o features.
GPT-4o can now understand and respond to voice in real-time, like a human being!! Image courtesy: OpenAI

One of the researchers, Mark Chen, said he was nervous and asked GPT-4o to help him calm down. The chatbot responded by asking Mark to take deep breaths. Mark tried to throw the chatbot off balance by breathing heavily, but GPT-4o told him that he was “not a vacuum cleaner” and to calm down.

What has changed from the “Voice Mode” that GPT-4 users have become familiar with is that you can now interrupt the model, and butt into the conversation whenever you want. You don’t have to wait for the AI model to finish speaking.

OpenAI’s researchers have given GPT-4o real-time response capabilities, which means you no longer have to endure that awkward 2-3 second pause between turns. And finally, GPT-4o understands human emotions (Emotion AI), which is a significant leap in technology by itself.

So what are people using GPT-4o for?

GPT-4o use cases

i) As an AI companion

You can talk to ChatGPT like you would to a friend, and it not only expresses emotion, it understands emotion. GPT-4o can use your phone’s camera and guess your location, look at the expression on your face and see if you are happy, angry or sad. GPT-4o can thus be that trustworthy friend that you are looking for, whom you can carry around in your pocket.

Open AI researcher using AI companion feature of GPT-4o
Image credit: OpenAI

ii) Advanced customer service voice support

GPT-4o can now process real-time responses with no delays, and can carry realistic voice conversations. This is super helpful in the customer service industry, where companies that have limited budgets can deploy GPT-4o powered chatbots that can provide superior customer service.

Greg Brockman also showed two GPT-4o-powered phones conversing with each other, which can be used to simulate conversations and prepare for conferences or interviews.

Open AI researcher using GPT-4o for realistic voice conversations
Enhanced Customer Service with Voice capabilities of GPT — 4o (Video Link https://vimeo.com/945587864)

iii) As a meeting note-taker / Meeting AI

GPT-4o can be used to facilitate a meeting and then, at the end, summarize the entire meeting in voice form. Say goodbye to lengthy meetings where no one is talking or there are long, awkward pauses. Once it has access to the screen and all the audio, GPT-4o can direct the conversation and wrap up the meeting.

Feature of Meeting AI, on GPT-4o can summarize the entire meeting in the form of voice.
Image credit: OpenAI

iv) GPT-4o as a tutor – rethinking education

GPT-4o is set to change the way education is delivered. The chatbot can guide students across complex topics, explaining them in a simple manner. It can be tailored to individual student’s needs and is programmed across a wide array of subjects. Learning is thus accessible to all.

Student learning with the help of GPT-4o as shown at launch event.
Learning Redefined by GPT — 4o (Video Link — https://vimeo.com/945587328)

v) Interacting with ChatGPT using video

You can now interact with ChatGPT using video, and Barret demonstrated this feature during the launch by asking ChatGPT to help him solve a linear equation. The researcher asked ChatGPT not to give the solution directly, but rather to help him solve it step by step.

Researcher interacting with GPT-4o using video.
 Interacting with ChatGPT using Video

The chatbot gave a complete walkthrough of how to solve this equation, answering multiple questions from Barret at various stages of solving the problem.

But this was just a teaser of things to come. Solving linear equations, while impressive, is a simple ask compared to, say, solving a coding problem.

And GPT-4o shines here as well!!

vi) Solving a coding problem

Barret whipped out his computer, and on his screen was a complex coding problem he needed help with. He hit Command+C to copy the code and gave ChatGPT a simple voice prompt: “Give me a brief one-sentence description of what is going on in the code.”

GPT-4o explaining programming code with the help of a voice command.
Image credit: OpenAI

Not only was GPT-4o able to describe the code, it also was able to explain what happens when a particular function is added or removed from the code.

It would take even experienced programmers at least a few minutes to produce this response, but here was GPT-4o explaining the code as if it had written it itself.

vii) Real-time language translation

Google Translate now has some serious competition, as the team at OpenAI showed off GPT-4o's translation capabilities. Mira Murati spoke to ChatGPT in Italian, and the chatbot easily translated the sentences from Italian to English and vice versa.

viii) Data Analysis

GPT-4o has advanced data analysis skills, with the ability to analyze complex data in CSV and Excel files and derive insights from them. Identify trends, discover outliers, do some predictive modelling, or simply navigate a large and complex dataset using GPT-4o.

GPT-4o analyzing spreadsheet
Image credit: OpenAI
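To make “discover outliers” concrete, here is the kind of analysis you might ask GPT-4o to run over a spreadsheet column, sketched in plain Python with invented sample data: flag values more than a couple of standard deviations from the mean.

```python
import statistics

def find_outliers(values, z_threshold=2.0):
    """Flag values more than `z_threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > z_threshold]

# Hypothetical monthly sales figures with one obvious spike.
monthly_sales = [120, 115, 130, 125, 118, 122, 480, 127]
print(find_outliers(monthly_sales))  # [480]
```

The appeal of GPT-4o here is that you can simply ask “which months look unusual?” in natural language instead of writing this yourself.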

ix) Building games

Coding games used to be one of the most challenging ways programmers tested their skills. But with GPT-4o, it looks like anyone with a laptop, a decent internet connection, and the chatbot can be a video game programmer.

An X user by the name of Alvaro Cintas took a screenshot of a simple “Breakout” game and asked ChatGPT to program it in Python. Within seconds, the chatbot produced complete, working code. Super Mario, here we come!!!

An X user asking GPT for the programming code of a game with the help of a screenshot.
Image credit:OpenAI
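The X post didn't include the generated source, so as a hedged sketch, here is the dependency-free core that a Breakout prompt like that might yield: a ball advancing each tick, bouncing off walls, and clearing bricks on contact (rendering and paddle input omitted; the real generated game presumably used a graphics library).

```python
def step(ball, velocity, bricks, width=10, height=8):
    """Advance the ball one tick, bouncing off walls and breaking bricks."""
    x, y = ball
    vx, vy = velocity
    x, y = x + vx, y + vy
    if x <= 0 or x >= width:   # bounce off the side walls
        vx = -vx
    if y <= 0 or y >= height:  # bounce off the top/bottom walls
        vy = -vy
    if (x, y) in bricks:       # hit a brick: remove it and reflect vertically
        bricks.remove((x, y))
        vy = -vy
    return (x, y), (vx, vy)

# One tick of play: the ball moves from (2, 3) to (3, 4) and breaks a brick.
bricks = {(3, 4), (5, 6)}
ball, vel = step((2, 3), (1, 1), bricks)
print(ball, vel, bricks)  # (3, 4) (1, -1) {(5, 6)}
```

Wiring a loop like this to a screenshot-matched layout is exactly the grunt work the chatbot did in seconds.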

x) Helping the visually impaired

GPT-4o can analyze and interpret visual data, with the ability to “see” objects and provide insights and information about them. Nope, this is not science fiction. The use cases of this functionality are aplenty, the most important being that it can help the visually impaired recognize objects. GPT-4o can also aid in medical imaging analysis, detecting anomalies in MRIs, CT scans, and X-rays with high accuracy.

Visually impaired person getting helped by a feature of GPT-4o
Vision Capability of GPT-4o (Video Link — https://vimeo.com/945587840)

While these use cases are impressive in themselves, we are just at the beginning of a revolution. As more and more weeks pass by, we are sure that tech enthusiasts will find new and innovative ways to use ChatGPT.

And the internet reacts..

Hours after the launch, netizens were excited, nervous, and downright skeptical about OpenAI's new chatbot, with some comparing it to the computer from the Joaquin Phoenix and Scarlett Johansson starrer Her.

Elon Musk famously said that GPT-4o “made him cringe,” but then, Elon has been openly critical of OpenAI (pun intended).

Many use cases were found for the chatbot, from helping differently-abled people communicate to solving the challenge of teacher shortages in third-world countries.

Whatever the case, OpenAI will make GPT-4o available to the free tier users in the coming weeks, and we will just have to wait and watch if OpenAI will win the Generative AI war!!

Devashish Mamgain