GPT-4o scored only 2.7 out of 100 on 'Humanity's Last Exam', and the top-performing AI model reached just 8 points, raising doubts about AI's true capabilities. Traditional benchmarks no longer reflect real performance because of 'benchmark saturation'.
OpenAI announced the discontinuation of older models including GPT-4o, bringing their run to an end. GPT-4o was previously praised for its conversational style and multimodal capabilities, but the company has now shifted its focus to its next-generation flagship, with GPT-5.2 becoming the recommended choice for users.
OpenAI announced that it will retire several earlier models starting next month, including GPT-4o, a favorite among paying users. The model launched in May 2024 and was popular for its conversational style. It was briefly taken offline after the release of GPT-5 but restored after the CEO promised to keep it available. The retirement may reflect declining usage, and OpenAI will steer users toward newer models.
India's AI model Alpie excels on the GSM8K math and software-engineering benchmarks, outperforming GPT-4o with only 32B parameters, and has been hailed as India's 'DeepSeek'.
A collection of chatbot AI products, including GPT-4o, Gemini, Qwen, Deepseek, Claude & Grok.
Showcases a diverse collection of AI art images and prompts generated by OpenAI's GPT-4o.
Developers can interactively experience the new voice models gpt-4o-transcribe, gpt-4o-mini-transcribe, and gpt-4o-mini-tts in the OpenAI API.
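For reference, a minimal sketch of calling these voice models through the official `openai` Python SDK; the audio file name and the voice choice are illustrative assumptions, not part of the announcement.

```python
# Minimal sketch using the official openai Python SDK (pip install openai).
# The audio file name and voice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Speech-to-text with gpt-4o-transcribe
with open("meeting.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="gpt-4o-transcribe",
        file=audio_file,
    )
print(transcript.text)

# Text-to-speech with gpt-4o-mini-tts
speech = client.audio.speech.create(
    model="gpt-4o-mini-tts",
    voice="alloy",
    input="Hello from the new voice models.",
)
speech.write_to_file("hello.mp3")
```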
Bailing is a voice dialogue bot similar to GPT-4o, implemented as an ASR, LLM, and TTS pipeline. It can run on low-end hardware and supports interruptions.
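A rough sketch of the ASR → LLM → TTS turn loop such a bot typically implements, including barge-in (interruption) handling; the stub classes below are hypothetical placeholders, not Bailing's actual interfaces.

```python
# Hypothetical ASR -> LLM -> TTS turn loop with interruption support.
# Recognizer, ChatModel, and Synthesizer are placeholder stubs.
import threading

class Recognizer:            # ASR stub: speech audio -> text
    def transcribe(self, audio: bytes) -> str:
        return "what is the weather today"

class ChatModel:             # LLM stub: dialogue history -> reply text
    def generate(self, history: list[dict]) -> str:
        return "I can't check live weather, but I can chat about it."

class Synthesizer:           # TTS stub: reply text -> spoken audio
    def speak(self, text: str, stop: threading.Event) -> None:
        for sentence in text.split(". "):
            if stop.is_set():        # user interrupted: stop speaking
                return
            print(f"[speaking] {sentence}")

def run_turn(audio: bytes, history: list[dict], stop: threading.Event) -> None:
    asr, llm, tts = Recognizer(), ChatModel(), Synthesizer()
    user_text = asr.transcribe(audio)                      # 1. recognize speech
    history.append({"role": "user", "content": user_text})
    reply = llm.generate(history)                          # 2. generate reply
    history.append({"role": "assistant", "content": reply})
    tts.speak(reply, stop)                                  # 3. speak, interruptible

run_turn(b"...", [], threading.Event())
```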
[Model API pricing comparison table, flattened in extraction: rows for OpenAI, ByteDance, Alibaba, Tencent, xAI, Baidu, MiniMax, StepFun, and ChatGLM; columns for price per million input tokens, price per million output tokens, and context length (in k tokens). Individual cell values are not reliably attributable to rows in the extracted text.]
unsloth
GLM-4-32B-0414 is a large language model with 32 billion parameters, comparable in performance to GPT-4o and DeepSeek-V3. It supports both Chinese and English, and excels in code generation, function calling, and complex task processing.
GLM-4-32B-0414 is a new member of the GLM family with 32 billion parameters, offering performance comparable to GPT-4o and DeepSeek-V3, and supports local deployment.
zai-org
GLM-4-32B-Base-0414 is a new member of the GLM family. It has 32 billion parameters and is pre-trained on 15T of high-quality data, with performance comparable to advanced models such as GPT-4o and DeepSeek-V3. The model supports convenient local deployment and performs well in code generation, function calling, and search-based Q&A.
GLM-4-32B-0414 is a new member of the GLM family, a high-performance large language model with 32 billion parameters. This model was pre-trained on 15T of high-quality data, including a large amount of synthetic reasoning data, and performs excellently in multiple task scenarios such as code generation, function calls, and search-based Q&A. Its performance can rival that of larger-scale models like GPT-4o and DeepSeek-V3.
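For the local deployment these listings mention, a minimal loading sketch with Hugging Face transformers is shown below; the repo id and chat-template call are assumptions based on the entries above, so check the model card for the exact identifier and recommended generation settings.

```python
# Sketch of local deployment with Hugging Face transformers.
# The repo id is an assumption; a recent transformers release may be required.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zai-org/GLM-4-32B-0414"  # assumed repo id, verify on the model card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user",
             "content": "Write a Python function that reverses a linked list."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```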
Psychotherapy-LLM
This model is a specialized psychological counseling model fine-tuned through preference learning based on Llama-3.1-8B-Instruct, excelling in psychological counseling sessions with a win rate surpassing GPT-4o.
AtlaAI
Atla Selene Mini is currently the most advanced small judge language model (SLMJ), with performance comparable to models 10 times its size, surpassing GPT-4o in multiple benchmarks.
openbmb
MiniCPM-o 2.6 is a GPT-4o-level multimodal large model that runs on mobile devices, supporting vision, voice, and live stream processing
VITA-MLLM
VITA-1.5 is a multimodal interaction model designed to achieve GPT-4o level real-time vision and voice interaction capabilities.
CISCai
This is the GGUF quantized version of the Qwen2.5-Coder-32B-Instruct model, which uses an advanced importance matrix quantization method to significantly reduce storage and computational resource requirements while maintaining the model's effectiveness. This model is the current state-of-the-art open-source large language model for code, with coding capabilities comparable to GPT-4o.
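A minimal sketch of running such a GGUF build locally with llama-cpp-python; the quantization file name is an illustrative assumption, so substitute whichever file you actually downloaded.

```python
# Sketch of local inference with llama-cpp-python (pip install llama-cpp-python).
# The model file name is an assumption; use the quantization you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-Coder-32B-Instruct.IQ4_XS.gguf",  # assumed file name
    n_ctx=8192,          # context window
    n_gpu_layers=-1,     # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```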
Sami92
A text classification model fine-tuned from XLM-R Large, designed to distinguish factual from non-factual statements in German text. The model uses weak supervision: it is first trained on a Telegram dataset annotated by GPT-4o, then further trained on a manually annotated dataset, reaching an accuracy of 0.9 on the test set.
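A short sketch of running such a classifier with the transformers pipeline API; the model id below is a hypothetical placeholder, so use the actual repo id from the Sami92 listing.

```python
# Sketch of inference with the transformers text-classification pipeline.
# The model id is a hypothetical placeholder, not the real repo name.
from transformers import pipeline

model_id = "Sami92/xlmr-large-german-factual-claims"  # hypothetical placeholder
classifier = pipeline("text-classification", model=model_id)

sentences = [
    "Die Arbeitslosenquote lag im März bei 5,7 Prozent.",    # factual claim
    "Ich finde, die Regierung macht einen schlechten Job.",  # opinion
]
for sentence, result in zip(sentences, classifier(sentences)):
    print(f"{result['label']} ({result['score']:.2f}): {sentence}")
```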
ruslandev
A language model fine-tuned from Meta-Llama-3-8B-Instruct that uses GPT-4o to improve training-data quality and focuses on strengthening Russian language capabilities. In the MT-Bench evaluation, its Russian score exceeds that of GPT-3.5-turbo.
An image generation and editing tool based on the OpenAI GPT-4o/gpt-image-1 model, supporting image generation from text prompts and image editing (such as inpainting, outpainting, and compositing), and compatible with multiple MCP clients.
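A minimal sketch of text-to-image generation with gpt-image-1 through the official `openai` SDK; the prompt, size, and output file name are illustrative, not the tool's actual defaults.

```python
# Minimal text-to-image sketch with gpt-image-1 via the openai SDK.
# Prompt, size, and output path are illustrative assumptions.
import base64
from openai import OpenAI

client = OpenAI()
result = client.images.generate(
    model="gpt-image-1",
    prompt="A watercolor painting of a lighthouse at dawn",
    size="1024x1024",
)

# gpt-image-1 returns base64-encoded image data
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("lighthouse.png", "wb") as f:
    f.write(image_bytes)
```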
This project is a stdio server based on the Model Context Protocol (MCP) that forwards prompts to OpenAI's ChatGPT (GPT-4o). It supports advanced summarization, analysis, and reasoning, and is suited to assistant integration in the LangGraph framework.
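A hedged sketch of what such a stdio MCP server can look like using the reference Python SDK's FastMCP helper; the tool name and model choice are assumptions, not the project's actual code.

```python
# Sketch of a stdio MCP server that forwards a prompt to GPT-4o
# (pip install mcp openai). Tool name and model are assumptions.
from mcp.server.fastmcp import FastMCP
from openai import OpenAI

mcp = FastMCP("chatgpt-forwarder")
client = OpenAI()

@mcp.tool()
def ask_chatgpt(prompt: str) -> str:
    """Forward a prompt to GPT-4o and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    mcp.run(transport="stdio")  # serve over stdio for MCP clients
```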
An image analysis MCP server based on the GPT-4o-mini model that can handle image content analysis from URLs or local paths
A server that interacts with ChatGPT through the MCP protocol for advanced text analysis and reasoning.
An image analysis MCP server based on the GPT-4o-mini model that identifies and describes content by receiving image URLs.
An intelligent chatbot based on Streamlit that uses GPT-4o to automatically route user requests to different tools (such as chatting, image generation, database queries, voice synthesis, etc.), supporting rapid experimentation with AI tool routing functions.
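A rough sketch of GPT-4o-based tool routing in Streamlit like the app described above; the route names and the routing prompt are illustrative assumptions rather than the project's implementation.

```python
# Sketch of routing user requests to tools with GPT-4o inside a Streamlit app.
# Route names and the routing prompt are illustrative assumptions.
import streamlit as st
from openai import OpenAI

client = OpenAI()
ROUTES = ["chat", "image_generation", "database_query", "voice_synthesis"]

def pick_route(user_message: str) -> str:
    """Ask GPT-4o which tool should handle the request."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": f"Reply with exactly one of: {', '.join(ROUTES)}."},
            {"role": "user", "content": user_message},
        ],
    )
    choice = response.choices[0].message.content.strip().lower()
    return choice if choice in ROUTES else "chat"  # fall back to plain chat

st.title("Tool-routing chatbot (sketch)")
if prompt := st.chat_input("Ask me anything"):
    route = pick_route(prompt)
    st.write(f"Routed to: **{route}**")
```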