OpenAI veteran Joanne Jang, an early team member who worked on GPT-4 and DALL-E 2, has resigned. Her departure may affect the company's product strategy and IPO plans…
OpenAI executive Joanne Jang, central to the development of GPT-4o and ChatGPT, has resigned after 4.5 years, potentially affecting the company's IPO…
Apple presents two ML studies: SQUIRE improves AI-driven UI control through GPT-4o fine-tuning and slot query representation, while the other strengthens image safety review to address current technical challenges…
Alibaba's Qwen 3.5 series of small models challenges the conventional belief that parameter count determines intelligence. Qwen 3.5-4B, with only 4 billion parameters, matches or even slightly outperforms GPT-4o (over 100 billion parameters) in third-party tests. This marks an important breakthrough in local deployment and efficiency optimization for domestic large models, ushering in an era of 'winning with small size'.
A collection of chatbot AI products, including GPT-4o, Gemini, Qwen, Deepseek, Claude & Grok.
Showcases a diverse collection of AI art images and prompts generated by OpenAI's GPT-4o.
Developers can interactively experience the new voice models gpt-4o-transcribe, gpt-4o-mini-transcribe, and gpt-4o-mini-tts in the OpenAI API.
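As a rough sketch of what calling one of these voice models looks like: the snippet below builds the JSON body for OpenAI's text-to-speech endpoint. The endpoint path and field names follow OpenAI's public audio API documentation, but treat them as assumptions to verify against the current docs; the request itself is not sent here, since that requires an API key.

```python
import json

def tts_request(text: str, voice: str = "alloy") -> dict:
    """Build the JSON body for OpenAI's speech endpoint
    (POST https://api.openai.com/v1/audio/speech).
    Sending it requires an Authorization: Bearer <API key> header."""
    return {"model": "gpt-4o-mini-tts", "voice": voice, "input": text}

body = json.dumps(tts_request("Hello from the new voice models"))
print(body)
```

The transcription models (`gpt-4o-transcribe`, `gpt-4o-mini-transcribe`) use a separate endpoint that takes an audio file as a multipart upload rather than a JSON body.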
Bailing is a GPT-4o-style voice dialogue bot built from an ASR → LLM → TTS pipeline. It runs on low-spec hardware and supports user interruptions (barge-in).
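A minimal sketch of such a pipeline, with stub components standing in for real ASR, LLM, and TTS engines. All class and function names here are illustrative, not Bailing's actual API; the point is the turn loop and the barge-in check.

```python
from dataclasses import dataclass, field

# Stubs standing in for real engines (e.g. Whisper-style ASR, a local
# LLM for dialogue, and a neural TTS model for synthesis).
class StubASR:
    def transcribe(self, audio: bytes) -> str:
        # A real engine would decode audio; here "audio" is UTF-8 text.
        return audio.decode("utf-8")

class StubLLM:
    def reply(self, text: str) -> str:
        return f"You said: {text}"

class StubTTS:
    def synthesize(self, text: str) -> bytes:
        return text.encode("utf-8")

@dataclass
class VoiceBot:
    """ASR -> LLM -> TTS pipeline with barge-in support."""
    asr: StubASR = field(default_factory=StubASR)
    llm: StubLLM = field(default_factory=StubLLM)
    tts: StubTTS = field(default_factory=StubTTS)
    speaking: bool = False

    def handle_turn(self, audio_in: bytes) -> bytes:
        # Barge-in: if the user speaks while the bot is still talking,
        # stop playback before processing the new utterance.
        if self.speaking:
            self.interrupt()
        text = self.asr.transcribe(audio_in)
        answer = self.llm.reply(text)
        self.speaking = True
        return self.tts.synthesize(answer)

    def interrupt(self) -> None:
        self.speaking = False  # a real bot would also cancel audio playback

bot = VoiceBot()
print(bot.handle_turn(b"hello").decode("utf-8"))  # You said: hello
```

Because each stage is a separate component, any of the stubs can be swapped for a real engine without changing the turn loop.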
[Model pricing comparison table: input and output token prices ($ per M tokens) and context length (k tokens) for models from OpenAI, ByteDance, Alibaba, Tencent, xAI, Baidu, MiniMax, StepFun, and ChatGLM. Per-model row values could not be reliably recovered from the page extraction.]
unsloth
GLM-4-32B-0414 is a large language model with 32 billion parameters, comparable in performance to GPT-4o and DeepSeek-V3. It supports both Chinese and English, and excels in code generation, function calling, and complex task processing.
GLM-4-32B-0414 is a new member of the GLM family with 32 billion parameters, offering performance comparable to GPT-4o and DeepSeek-V3, and supports local deployment.
zai-org
GLM-4-32B-Base-0414 is a new member of the GLM family. It has 32 billion parameters and is pre-trained on 15T of high-quality data. Its performance is comparable to that of advanced models such as GPT-4o and DeepSeek-V3. The model supports convenient local deployment and performs excellently in code generation, function calling, search-based Q&A, and more.
GLM-4-32B-0414 is a new member of the GLM family, a high-performance large language model with 32 billion parameters. This model was pre-trained on 15T of high-quality data, including a large amount of synthetic reasoning data, and performs excellently in multiple task scenarios such as code generation, function calls, and search-based Q&A. Its performance can rival that of larger-scale models like GPT-4o and DeepSeek-V3.
Psychotherapy-LLM
This is a specialized psychological counseling model fine-tuned via preference learning from Llama-3.1-8B-Instruct; in counseling dialogues it achieves a higher win rate than GPT-4o.
AtlaAI
Atla Selene Mini is currently the most advanced small judge language model (SLMJ), with performance comparable to models 10 times its size, surpassing GPT-4o in multiple benchmarks.
openbmb
MiniCPM-o 2.6 is a GPT-4o-level multimodal large model that runs on mobile devices, supporting vision, voice, and live-stream processing.
VITA-MLLM
VITA-1.5 is a multimodal interaction model designed to achieve GPT-4o level real-time vision and voice interaction capabilities.
CISCai
This is the GGUF quantized version of the Qwen2.5-Coder-32B-Instruct model, which uses an advanced importance matrix quantization method to significantly reduce storage and computational resource requirements while maintaining the model's effectiveness. This model is the current state-of-the-art open-source large language model for code, with coding capabilities comparable to GPT-4o.
Sami92
A text classification model fine-tuned from XLM-R Large, specifically designed to identify factual and non-factual statements in German texts. The model uses weakly supervised learning: it is first trained on a Telegram dataset annotated by GPT-4o, then further trained on a manually annotated dataset, reaching an accuracy of 0.9 on the test set.
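The two-stage recipe (pre-train on noisy machine labels, then continue on a small gold set) can be illustrated with a toy stand-in. This is a minimal sketch assuming a simple logistic-regression classifier in place of XLM-R and synthetic 2-D data in place of Telegram text; none of it reflects the model's actual training code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, w=None, epochs=200, lr=0.5):
    """Logistic regression via gradient descent. Passing `w` lets us
    continue training from an earlier stage's weights."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Toy 2-D data: class 1 ("factual") iff x0 + x1 > 0.
X = rng.normal(size=(500, 2))
gold = (X.sum(axis=1) > 0).astype(float)

# Stage 1: weak labels (as if produced by GPT-4o) = gold + 15% noise.
flip = rng.random(500) < 0.15
weak = np.where(flip, 1 - gold, gold)
w = train(X, weak)

# Stage 2: continue training on a small manually annotated subset.
w = train(X[:100], gold[:100], w=w)

acc = ((sigmoid(X @ w) > 0.5) == gold).mean()
print(f"accuracy: {acc:.2f}")
```

The design point is that the large noisy stage gets the weights close, and the small clean stage corrects the residual label-noise bias cheaply.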
ruslandev
A language model fine-tuned from Meta-Llama-3-8B-Instruct. It uses GPT-4o to improve training-data quality and focuses on enhancing Russian-language capabilities; in the MT-Bench evaluation, its Russian score exceeds that of GPT-3.5-turbo.
An image generation and editing tool based on the OpenAI GPT-4o/gpt-image-1 model, supporting image generation from text prompts and image editing (inpainting, outpainting, compositing, etc.), and compatible with multiple MCP clients.
This project is a stdio server based on the Model Context Protocol (MCP), used to forward prompts to OpenAI's ChatGPT (GPT-4o), supporting advanced summarization, analysis, and reasoning functions, suitable for assistant integration in the LangGraph framework.
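The stdio transport boils down to reading JSON-RPC messages on stdin and writing responses to stdout. Below is a heavily simplified sketch of that round-trip, not the official MCP SDK: the method name `tools/call` follows the MCP spec, but the handler shape and the stubbed ChatGPT call are illustrative assumptions.

```python
import json

def forward_to_chatgpt(prompt: str) -> str:
    # Stub for the real OpenAI API call (e.g. a chat completion with
    # model="gpt-4o"); replaced here so the sketch runs offline.
    return f"[gpt-4o summary of: {prompt}]"

def handle_request(line: str) -> str:
    """Handle one JSON-RPC message from stdin, MCP-style."""
    req = json.loads(line)
    if req.get("method") == "tools/call":
        prompt = req["params"]["arguments"]["prompt"]
        result = {"content": [{"type": "text",
                               "text": forward_to_chatgpt(prompt)}]}
    else:
        result = {"error": "unsupported method"}
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                       "result": result})

# One request/response round-trip over the stdio transport:
request = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "ask_chatgpt",
               "arguments": {"prompt": "summarize X"}},
})
print(handle_request(request))
```

A real server would also handle the MCP initialization handshake and `tools/list` before any tool calls arrive.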
An image analysis MCP server based on the GPT-4o-mini model that can handle image content analysis from URLs or local paths
A server that interacts with ChatGPT through the MCP protocol for advanced text analysis and reasoning.
An image analysis MCP server based on the GPT-4o-mini model that identifies and describes content by receiving image URLs.
An intelligent chatbot based on Streamlit that uses GPT-4o to automatically route user requests to different tools (such as chatting, image generation, database queries, voice synthesis, etc.), supporting rapid experimentation with AI tool routing functions.
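The routing idea can be sketched in a few lines. In the actual app the router would ask GPT-4o to pick a tool (e.g. via function calling); the keyword matching and tool stubs below are simplified stand-ins, and all names are illustrative.

```python
from typing import Callable, Dict

# Tool registry: in the real app each tool would wrap GPT-4o, an image
# model, a database, or a TTS engine; stubs keep the sketch self-contained.
def chat_tool(msg: str) -> str:
    return f"chat: {msg}"

def image_tool(msg: str) -> str:
    return f"image generated for: {msg}"

def db_tool(msg: str) -> str:
    return f"query result for: {msg}"

TOOLS: Dict[str, Callable[[str], str]] = {
    "chat": chat_tool,
    "image": image_tool,
    "database": db_tool,
}

def route(msg: str) -> str:
    """Pick a tool for the request. A GPT-4o-based router would make
    this decision via function calling; keywords stand in here."""
    text = msg.lower()
    if any(k in text for k in ("draw", "picture", "image")):
        return "image"
    if any(k in text for k in ("query", "database", "sql")):
        return "database"
    return "chat"

def handle(msg: str) -> str:
    return TOOLS[route(msg)](msg)

print(handle("draw a cat"))        # image generated for: draw a cat
print(handle("query user table"))  # query result for: query user table
```

Keeping the registry as a plain dict means new tools (voice synthesis, web search, etc.) can be added without touching the dispatch logic.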