OpenAI introduces GPT-4o model, promising real-time conversation


2024-5-14 02:04

OpenAI announced a new model, GPT-4o, which will soon allow real-time conversation with an AI assistant.

In a May 13 demo, OpenAI members showed that the model can provide breathing feedback, tell a story, and help solve a math problem, among other applications.

Head of Frontiers Research Mark Chen noted that, although users could previously access Voice Mode, the new model allows interruptions, no longer has a multi-second delay, and can recognize and communicate in various emotional styles.

OpenAI CEO Sam Altman commented on the update in a separate blog post, calling it the “best computer interface I’ve ever used,” adding that it “feels like AI from the movies.”

He said:

“Getting to human-level response times and expressiveness turns out to be a big change.”

In addition to improved text, vision, and audio capabilities, GPT-4o is faster and offers the same level of intelligence as GPT-4.

Full availability pending

Initially, GPT-4o will have limited features, but the model can already understand and discuss images “much better than any existing model.” In one example, OpenAI suggested that the model can examine a menu and provide translations, context, and recommendations.

Each of the company’s subscription tiers includes different access limits. Starting today, ChatGPT Free users can access GPT-4o with usage limits, while ChatGPT Plus and Team users can access it with five times greater usage limits.

The company also plans to extend the feature to Enterprise users later with “even higher limits.”

OpenAI will introduce the updated “Voice Mode” in the near future. It plans to release an alpha in the coming weeks, with early access for Plus users.

Competitive AI sector

OpenAI’s announcement follows recent upgrades from competing companies.

In March, Anthropic released an upgrade to Claude that it called superior to OpenAI’s GPT-4. Meta, meanwhile, announced Llama 3 with an improved parameter count in April.

Further industry developments are expected soon. Google is set to host its I/O conference on May 14, with AI featured in several keynotes, and Apple is expected to announce iOS 18, including various new AI features, in June.
