Apple introduces UniGen 1.5, a multimodal AI model that unifies image understanding, generation, and editing within a single framework, significantly improving efficiency. The model leverages its image understanding capabilities to refine its generation results, marking a notable technical advance.
OpenAI upgrades ChatGPT with an 'App Directory' that integrates third-party tools, letting users access external services directly within the chat interface. It also opens a Developer SDK so outside teams can build deeply integrated experiences, a step toward a mature AI platform.
Apple's open-source SHARP model converts 2D photos into physically accurate 3D scenes in under a second, significantly boosting 3D content creation efficiency.
OpenAI integrates ChatGPT with Apple Music, enabling users to create playlists via natural language commands and expanding its application scope.
TestFlight is a platform provided by Apple to help developers test beta versions of apps.
OneTap changes the way you share content on Apple devices, allowing you to save anything with just one tap for later copying and pasting.
iPhone 16e: Apple's latest iPhone offering excellent performance at an accessible price point. Features the A18 chip and a 48MP Photonic Engine camera.
An intelligent voice assistant app designed for Apple Watch that can perform a variety of operations without a paired iPhone.
mlx-community
This model is an MLX-format conversion of Ministral-3-3B-Instruct-2512, an instruction-tuned model released by Mistral AI. It is a 3B-parameter large language model optimized for instruction following and dialogue tasks, with multilingual support. The MLX format lets it run efficiently on Apple Silicon devices.
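As a rough illustration of how MLX-converted instruct models like this are typically run, here is a minimal sketch using the mlx-lm Python package; the repository ID and prompt are placeholders, not taken from the model card above.

```python
# Minimal sketch: run an MLX-converted instruct model with mlx-lm on Apple Silicon.
# The repo ID below is a placeholder; substitute the actual model card path.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/SOME-INSTRUCT-MODEL-MLX")  # hypothetical repo ID

prompt = "Summarize the benefits of on-device inference in two sentences."
# Instruct models usually expect a chat template; apply it if the tokenizer has one.
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        tokenize=False,
        add_generation_prompt=True,
    )

text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(text)
```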
This model is an MLX-format conversion of Kimi-Linear-48B-A3B-Instruct, optimized for Apple Silicon devices such as the Mac Studio. It is a 48B-parameter large language model that supports instruction following and is suited to local inference and conversation tasks.
kyr0
This is an automatic speech recognition model optimized for Apple silicon devices. Converted to the MLX framework and quantized to FP8, it enables fast on-device speech transcription on Apple hardware. The model is fine-tuned for verbatim accuracy, making it particularly suitable for scenarios that require high-precision transcription.
This model is an 8-bit quantized version of allenai/Olmo-3-7B-Instruct, converted for the Apple MLX framework. It is a 7B-parameter large language model supporting instruction following and dialogue tasks.
Dogacel
This is an OCR model adapted from the original DeepSeek-OCR model to support inference on Apple Metal Performance Shaders (MPS) as well as CPU. It extracts text from images and converts it into structured formats, with support for multilingual document recognition.
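As a rough sketch of the MPS/CPU fallback this kind of adaptation relies on, the snippet below shows only PyTorch device selection; the model-loading call is a hypothetical placeholder, not the project's actual API.

```python
# Minimal sketch: pick Apple's Metal (MPS) backend when available, otherwise CPU.
import torch

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
print(f"Running inference on: {device}")

# Hypothetical placeholder for loading the OCR model; the real project defines
# its own loading entry point, which is not described in the item above.
# model = load_ocr_model().to(device)
# result = model.recognize("document.png")
```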
This is a 4-bit quantized version of VibeThinker-1.5B, optimized for Apple chips via the MLX framework. It is a 1.5B-parameter dense language model designed specifically for mathematical reasoning and algorithmic coding problems.
McG-221
This model was converted from summykai/gemma3-27b-abliterated-dpo to MLX format using mlx-lm version 0.28.3. It is a 27B-parameter Gemma 3 large language model fine-tuned with DPO (Direct Preference Optimization) and optimized to run efficiently on Apple Silicon via the MLX framework.
This model is an MLX-format conversion of the instruction-tuned Falcon-H1-34B-Instruct, optimized for Apple Silicon (M-series chips). It was converted from the original model to an 8-bit quantized, MLX-compatible format using the mlx-lm tool, aiming at efficient local inference on macOS devices.
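To illustrate the kind of conversion step described above, here is a minimal sketch using the mlx-lm Python conversion utility; the source repository ID and output path are placeholders, and keyword arguments may differ between mlx-lm versions.

```python
# Minimal sketch: convert a Hugging Face model to 8-bit quantized MLX weights.
# Repo ID and output directory are placeholders; argument names follow the
# mlx-lm convert utility but may vary across versions.
from mlx_lm import convert

convert(
    hf_path="some-org/Some-Instruct-Model",  # hypothetical source repo
    mlx_path="./some-instruct-model-mlx-8bit",
    quantize=True,
    q_bits=8,         # 8-bit quantization
    q_group_size=64,  # per-group quantization granularity
)
```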
inferencerlabs
A text generation model built on the MLX library, supporting inference with multiple quantization schemes and distributed computation, and able to run efficiently on Apple hardware.
Marvis-AI
This is a text-to-speech model optimized for the MLX framework, converted from the original Marvis-AI/marvis-tts-100m-v0.2 model. It uses 6-bit quantization and is tuned for Apple Silicon hardware, providing efficient speech synthesis.
This is a 6-bit quantized version of the Kimi-Linear-48B-A3B-Instruct model, converted for the Apple MLX framework. It retains the original model's strong instruction-following ability while quantization significantly reduces storage and compute requirements, making it suitable for efficient operation on Apple hardware.
This model is based on moonshotai's Kimi-Linear-48B-A3B-Instruct large language model and was converted with the mlx-lm tool into an 8-bit quantized version for Apple Silicon (MLX framework). It is a 48B-parameter instruction-tuned model designed for following human instructions and conversational interaction.
lmstudio-community
Qwen3-VL-2B-Thinking is a vision-language model from Qwen. At a 2B parameter scale, it is quantized to 8 bits with MLX and optimized for Apple Silicon chips. The model supports multimodal understanding and generation over images and text.
Qwen
Qwen3-VL-2B-Thinking is one of the most powerful vision-language models in the Qwen series. It uses GGUF format weights and supports efficient inference on devices such as CPUs, NVIDIA GPUs, and Apple Silicon. This model has excellent multimodal understanding and reasoning capabilities, especially enhancing visual perception, spatial understanding, and agent interaction functions.
This is a 4-bit quantized version converted from the moonshotai/Kimi-Linear-48B-A3B-Instruct model, optimized for the Apple MLX framework, providing efficient text generation capabilities.
This is the MLX format conversion version of the MiniMax-M2 model, converted from the original model using mlx-lm 0.28.1. It supports 8-bit quantization and an optimized configuration with a group size of 32, and is optimized for running on Apple Silicon devices.
This is the 8-bit quantized version of the MiniMax-M2 model in MLX format, converted from the original model using mlx-lm 0.28.4 and optimized for Apple Silicon devices.
Granite-4.0-H-1B-8bit is a small language model in the IBM Granite series, specifically optimized for Apple Silicon chips. It uses 8-bit quantization technology, has 1B parameters, and features efficient inference and low resource consumption.
MiniMax-M2-6bit is the MLX format conversion version of the MiniMaxAI/MiniMax-M2 model, converted using mlx-lm 0.28.4, and supports efficient operation on Apple Silicon devices.
MiniMax-M2-4bit is a 4-bit quantized version converted from MiniMaxAI/MiniMax-M2 using the mlx-lm tool. It is specifically optimized for Apple Silicon chips and provides efficient text generation capabilities.
The Apple MCP toolset is a collection of native Apple tools based on the MCP protocol, providing integrated services for the Apple ecosystem such as messaging, notes, contacts, email, reminders, and calendar.
An MCP server for querying Apple Health data via SQL, implemented based on DuckDB for efficient analysis, supporting natural language queries and automatic report generation.
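As a rough idea of the kind of SQL analysis such a server runs under the hood, the sketch below uses DuckDB's Python API against a hypothetical table of exported health records; the table and column names are assumptions, not the server's actual schema.

```python
# Minimal sketch: query exported health records with DuckDB.
# Table and column names are hypothetical, not the MCP server's real schema.
import duckdb

con = duckdb.connect("health.duckdb")  # hypothetical database file

# Example: average daily step count per month.
rows = con.execute(
    """
    SELECT date_trunc('month', recorded_at) AS month,
           AVG(value) AS avg_steps
    FROM health_records                      -- hypothetical table
    WHERE record_type = 'step_count'
    GROUP BY month
    ORDER BY month
    """
).fetchall()

for month, avg_steps in rows:
    print(month, round(avg_steps, 1))
```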
A server that provides local Apple Notes database access for the Claude desktop client, supporting reading and searching of note content.
A local server based on the MCP protocol that implements semantic search and RAG functions for Apple Notes, which can be called by AI assistants such as Claude.
Apple Doc MCP is a Model Context Protocol server that provides direct access to Apple developer documentation. It integrates with AI programming assistants, supporting intelligent search, framework browsing, and detailed documentation retrieval.
An MCP server that enables LLM applications to interact with macOS through AppleScript, providing a standardized interface to control various system functions.
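As a rough sketch of what such a bridge can look like, the example below uses the MCP Python SDK's FastMCP helper to expose a single tool that runs AppleScript through macOS's osascript command; the server and tool names are illustrative, not taken from the project above.

```python
# Minimal sketch: an MCP server exposing one tool that runs AppleScript
# via macOS's osascript command. Server and tool names are illustrative.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("applescript-bridge")  # hypothetical server name

@mcp.tool()
def run_applescript(script: str) -> str:
    """Execute an AppleScript snippet and return its output."""
    result = subprocess.run(
        ["osascript", "-e", script],
        capture_output=True,
        text=True,
        check=False,
    )
    if result.returncode != 0:
        return f"AppleScript error: {result.stderr.strip()}"
    return result.stdout.strip()

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```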
A service that allows AI assistants to control Apple Shortcuts through the MCP protocol
An MCP service that automatically generates a complete website icon set, supporting the creation of favicons, Apple Touch icons, and web app manifest files in various sizes from PNG images or URLs.
Apple Books MCP is a model context protocol server designed for Apple Books, providing functions such as book management, annotation query, and intelligent analysis.
An MCP server for interacting with macOS Apple calendars, providing a standardized interface for AI models to access and manipulate calendar data.
An audio transcription MCP service based on MLX Whisper, supporting transcription of local files, Base64 audio, and YouTube videos, optimized for Apple M-series chips.
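As an illustration of the underlying transcription call such a service wraps, here is a minimal sketch with the mlx-whisper Python package; the audio path and model repository are placeholders for whatever the service is configured to use.

```python
# Minimal sketch: transcribe a local audio file with mlx-whisper on Apple Silicon.
# The audio path and model repo are placeholders.
import mlx_whisper

result = mlx_whisper.transcribe(
    "meeting.mp3",  # hypothetical local file
    path_or_hf_repo="mlx-community/whisper-large-v3-mlx",
)

print(result["text"])
```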
A simple MCP server that can read and save memory information from Apple Notes and supports remote access to Mac data via SSH.
An Apple Music API interaction server based on the MCP protocol, providing song search and playback link generation functions.
VGGT-MPS is a 3D vision reconstruction tool optimized for Apple chips, accelerated by Metal Performance Shaders. It can generate depth maps, camera poses, and 3D point clouds from single or multiple images, and supports sparse attention for city-level reconstruction.
This project provides a server based on the Model Context Protocol (MCP) that supports remote execution of AppleScript and JavaScript automation scripts on macOS. It includes a rich knowledge base of predefined scripts and can control macOS applications and system functions.
The WhatsApp MCP Server is a Node.js application that enables programmatic interaction with the WhatsApp desktop app through AppleScript automation, providing message sending and status checking.
An MCP server application for running applications on macOS.
The Apple MCP toolset is a native Apple tool collection designed for the MCP protocol, providing various functions such as messages, notes, contacts, emails, reminders, calendars, web search, and maps.
A tool for automatically posting on WeChat Moments on macOS via AppleScript
ACMS is an MCP server that provides programmatic access to Apple's container CLI tools on macOS, supporting local or remote HTTP/S connections and including more than 50 container operation functions.