Apple accelerates its AI strategy by integrating third-party tools like ChatGPT into Siri, testing Google Gemini, and exploring partnerships with Anthropic and Perplexity AI.
Apple CEO Tim Cook announced plans to integrate more third-party AI tools like ChatGPT into its operating systems, with potential partners including Google (Gemini), Anthropic, and Perplexity. An upgraded Siri with expanded AI capabilities is slated to launch next year.
OpenAI acquires Software Applications and its unreleased macOS software Sky, a personal assistant that observes the screen to automate tasks like writing and coding. The team previously created Workflow, which became Apple's Shortcuts.
Apple is hiring experts in reasoning models to address major LLM flaws, focusing on new architectures for improved reasoning, planning, tool use, and agent-based capabilities.
TestFlight is a platform provided by Apple to help developers test beta versions of apps.
OneTap changes the way you share content on Apple devices, allowing you to save anything with just one tap for later copying and pasting.
iPhone 16e: Apple's latest iPhone offering excellent performance at an accessible price point. Features the A18 chip and a 48MP Photonic Engine camera.
An intelligent voice assistant app for Apple Watch that can perform a variety of tasks without a paired iPhone.
mlx-community
An MLX-format conversion of the MiniMax-M2 model, produced from the original model with mlx-lm 0.28.1. It uses 8-bit quantization with a group size of 32 and is optimized to run on Apple Silicon devices.
This is the 8-bit quantized version of the MiniMax-M2 model in MLX format, converted from the original model using mlx-lm 0.28.4 and optimized for Apple Silicon devices.
MiniMax-M2-6bit is an MLX-format conversion of the MiniMaxAI/MiniMax-M2 model, produced with mlx-lm 0.28.4, and runs efficiently on Apple Silicon devices.
MiniMax-M2-4bit is a 4-bit quantized version converted from MiniMaxAI/MiniMax-M2 using the mlx-lm tool. It is specifically optimized for Apple Silicon chips and provides efficient text generation capabilities.
DeepSeek-OCR-8bit is an MLX-format conversion of the DeepSeek-OCR model. It is a vision-language model optimized for Apple Silicon, supporting multilingual OCR and image-text understanding tasks.
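Conversions like those above are typically loaded with the mlx-lm Python package on an Apple Silicon Mac. A minimal sketch, assuming mlx-lm is installed and using an illustrative repository id (the exact id of a given conversion may differ):

```python
# Minimal sketch: load an MLX-converted model and run text generation with mlx-lm.
# The repository id below is illustrative; substitute the conversion you want to use.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/MiniMax-M2-8bit")  # downloads weights + tokenizer

prompt = "Explain what the MLX framework is in one sentence."
text = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(text)
```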
nightmedia
An MLX-format conversion of the Qwen3-Next-80B-A3B-Instruct model, optimized for efficient operation on Apple Silicon devices. The underlying model is an 80-billion-parameter large language model for text generation, with strong dialogue and reasoning capabilities.
Wwayu
A mixture-of-experts model based on the GLM-4.6 architecture, with 40% of the experts uniformly pruned using the REAP method. The resulting 218B-parameter model has been converted to a 3-bit quantized MLX build suitable for efficient operation on Apple Silicon devices.
lmstudio-community
Qwen3-VL-2B-Instruct is an efficient image-text-to-text model developed by the Qwen team. This build uses MLX 8-bit quantization and is particularly well suited to running vision-language tasks on Apple Silicon devices.
manasmisra
A mixture-of-experts model derived from GLM-4.5-Air by uniformly pruning 25% of the experts with the REAP method, converted to a 4-bit quantized MLX build for efficient inference on Apple Silicon devices.
An MLX-format conversion of the Qwen3-Coder-REAP-25B-A3B model, produced with the mlx-lm tool, optimized for Apple Silicon, and supporting efficient text generation.
An optimized variant of the GLM-4.5-Air model with 25% of its experts pruned using the REAP method, converted to the MLX format for efficient operation on Apple Silicon devices.
A 4-bit quantized build of the Qwen3-VL-4B-Instruct model, converted with the MLX framework and optimized for Apple Silicon. It is a vision-language model supporting image understanding and multimodal dialogue.
Qwen3-VL-8B-Instruct is a multimodal vision-language model from the Qwen team that turns image-text input into text. This build uses MLX 8-bit quantization and is optimized for Apple Silicon, improving efficiency while preserving quality.
Qwen3-VL-8B-Instruct is a vision-language model developed by Qwen, optimized through MLX quantization for Apple Silicon devices. It accepts multimodal image and text input and can understand images and generate related text.
LFM2-8B-A1B is an 8-bit quantized MLX build optimized for Apple Silicon, using a Mixture-of-Experts (MoE) architecture with roughly 8 billion total parameters and about 1 billion active per token, enabling fast on-device inference.
ethicalabs
ethicalabs/granite-4.0-h-small-base-MLX is an MLX-format conversion of the IBM Granite-4.0-H-Small-Base model, optimized for Apple's MLX framework to provide efficient text generation.
A 5-bit quantized version of the IBM Granite-4.0-H-Tiny model, optimized for Apple Silicon. It uses a hybrid architecture of Mamba-2 and attention layers combined with a Mixture-of-Experts (MoE) design to achieve efficient inference while maintaining quality.
IBM Granite-4.0-H-Tiny is a hybrid Mamba-2/Transformer model optimized for Apple Silicon, quantized to 3 bits and designed for long context, efficient inference, and enterprise use. It combines the Mamba-2 architecture with Mixture-of-Experts (MoE) layers, significantly reducing memory usage while maintaining expressiveness.
A 4-bit quantized version of the IBM Granite-4.0-H-Tiny model, optimized for Apple Silicon and using the MLX framework for efficient inference. The model has been processed with DWQ (Dynamic Weight Quantization) to significantly reduce its size while maintaining performance.
This is the 8-bit quantized version of the IBM Granite-4.0-H-Micro model in MLX format, optimized for Apple Silicon, providing efficient large language model inference capabilities.
The Apple MCP toolset is a collection of native Apple tools based on the MCP protocol, providing integrated services for the Apple ecosystem such as messaging, notes, contacts, email, reminders, and calendar.
An MCP server for querying Apple Health data with SQL, built on DuckDB for efficient analysis and supporting natural language queries and automatic report generation.
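The pattern behind such a server is simple: load exported health records into DuckDB and let the model issue SQL against them. A minimal sketch of that idea, with a hypothetical CSV export and column names (not the server's actual schema):

```python
# Minimal sketch: query Apple Health records with SQL via DuckDB.
# 'health_records.csv' and its columns are hypothetical stand-ins for a real export.
import duckdb

con = duckdb.connect()  # in-memory database
con.execute("CREATE TABLE health AS SELECT * FROM read_csv_auto('health_records.csv')")

# Average step count per month (assuming 'type', 'start_date', and 'value' columns).
rows = con.execute("""
    SELECT strftime(start_date, '%Y-%m') AS month, AVG(value) AS avg_steps
    FROM health
    WHERE type = 'StepCount'
    GROUP BY month
    ORDER BY month
""").fetchall()
print(rows)
```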
A server that provides local Apple Notes database access for the Claude desktop client, supporting reading and searching of note content.
A local server based on the MCP protocol that implements semantic search and RAG functions for Apple Notes, which can be called by AI assistants such as Claude.
Apple Doc MCP is a Model Context Protocol server that provides direct access to Apple developer documentation. It integrates with AI coding assistants, supporting intelligent search, framework browsing, and detailed documentation retrieval.
An MCP server that enables LLM applications to interact with macOS through AppleScript, providing a standardized interface to control various system functions.
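The core of an AppleScript-bridging server like this can be very small: expose a tool that shells out to osascript. A minimal sketch using the MCP Python SDK's FastMCP helper; the server and tool names are illustrative, not this project's actual interface:

```python
# Minimal sketch: expose AppleScript execution as an MCP tool via osascript.
# Server and tool names are illustrative, not this project's actual interface.
import subprocess
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("applescript-bridge")

@mcp.tool()
def run_applescript(script: str) -> str:
    """Run an AppleScript snippet with /usr/bin/osascript and return its output."""
    result = subprocess.run(
        ["osascript", "-e", script],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    mcp.run()  # serve over stdio for an MCP-capable client such as Claude Desktop
```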
A service that allows AI assistants to control Apple Shortcuts through the MCP protocol.
An MCP server for interacting with macOS Apple calendars, providing a standardized interface for AI models to access and manipulate calendar data.
An MCP service that automatically generates a complete website icon set, supporting the creation of favicons, Apple Touch icons, and web app manifest files in various sizes from PNG images or URLs.
Apple Books MCP is a Model Context Protocol server designed for Apple Books, providing functions such as book management, annotation queries, and intelligent analysis.
An audio transcription MCP service based on MLX Whisper, supporting transcription of local files, Base64 audio, and YouTube videos, optimized for Apple M-series chips.
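Under the hood, this kind of service typically calls the mlx-whisper package directly. A minimal sketch of a local-file transcription, with an illustrative audio path and model repository id:

```python
# Minimal sketch: transcribe a local audio file with mlx-whisper on Apple Silicon.
# The audio path and model repository id are illustrative.
import mlx_whisper

result = mlx_whisper.transcribe(
    "meeting.mp3",
    path_or_hf_repo="mlx-community/whisper-large-v3-mlx",
)
print(result["text"])
```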
A simple MCP server that can read and save memory entries in Apple Notes and supports remote access to Mac data via SSH.
An Apple Music API interaction server based on the MCP protocol, providing song search and playback link generation functions.
The WhatsApp MCP Server is a Node.js-based application that enables programmatic interaction with the WhatsApp desktop app through AppleScript automation, providing message sending and status-checking functions.
VGGT-MPS is a 3D vision reconstruction tool optimized for Apple chips, accelerated by Metal Performance Shaders. It can generate depth maps, camera poses, and 3D point clouds from single or multiple images, and supports sparse attention for city-level reconstruction.
This project provides a server based on the Model Context Protocol (MCP) that supports remote execution of AppleScript and JavaScript automation scripts on macOS. It includes a rich knowledge base of predefined scripts and can control macOS applications and system functions.
An MCP server for launching and running applications on macOS.
The Apple MCP toolset is a native Apple tool collection designed for the MCP protocol, providing various functions such as messages, notes, contacts, emails, reminders, calendars, web search, and maps.
A tool for automatically posting to WeChat Moments on macOS via AppleScript.
Apple Developer Documentation MCP service provides AI programming assistants with the ability to directly access Apple's official development documentation, supporting intelligent search, framework browsing, and detailed documentation retrieval.