Meta announced that its AI photo-editing suggestions feature on Facebook is now fully available in the United States and Canada. With permission, the feature can access unshared photos in users' camera rolls and suggest edits, encouraging users to post AI-optimized images to their feed or Stories. During testing this summer, users opening the app were prompted to grant cloud-processing permission in order to enable personalized creative recommendations.
Meta announced that starting December 16, 2025, text and voice conversations between users and Meta AI will feed into its advertising and content-recommendation algorithms. Interactions in AI chats will directly influence the ads, posts, and group content users see on platforms such as Facebook and Instagram; for example, after discussing hiking, a user will see more hiking-related ads and content in their feed.
Meta introduces an AI assistant for Facebook Dating, offering personalized matches (e.g., 'tech women in Brooklyn') and profile optimization to boost success rates and reduce dating fatigue.
Meta's AI assistant for Facebook Dating is more than a matching tool: it acts as a personal dating advisor that interprets user needs. Users can state specific requirements (such as 'a tech girl from Brooklyn') or ask for improvements to their profiles, and the AI responds with personalized recommendations, aiming to change the way people look for a partner.
Discover, save, download, and generate ads that perform well on platforms such as TikTok, Facebook, and YouTube.
Outsoci is the ultimate lead-generation tool for businesses and agencies, able to legally extract and collect emails from Facebook, Instagram, TikTok, LinkedIn, YouTube, Google Maps, Reddit, and ProductHunt.
JarveePro is a multi-platform social media management and automation tool, supporting Instagram, Facebook, YouTube, and other platforms.
Add an AI chatbot to your website to communicate with visitors via WhatsApp, Facebook Messenger, contact forms, etc.
woodBorjo
This instance segmentation model fine-tunes facebook/mask2former-swin-tiny-coco-instance on the qubvel-hf/ade20k-mini dataset. It is optimized for scene understanding on ADE20K-mini and performs well on instance segmentation.
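A minimal inference sketch with Hugging Face transformers. The fine-tuned checkpoint's exact repo id isn't given above, so the code below loads the base model it was derived from as a stand-in; swap in the actual fine-tuned id when known:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

# Base checkpoint shown as a placeholder; substitute the ADE20K-mini fine-tune.
CKPT = "facebook/mask2former-swin-tiny-coco-instance"

processor = AutoImageProcessor.from_pretrained(CKPT)
model = Mask2FormerForUniversalSegmentation.from_pretrained(CKPT)

image = Image.new("RGB", (640, 480))  # stand-in for a real scene image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits into a per-pixel instance id map at the original size.
result = processor.post_process_instance_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(result["segmentation"].shape)
```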
sahirp
This model is a fine-tuned version of Facebook's DETR-ResNet-50-DC5 object-detection model, trained on the Fashionpedia dataset for fashion item detection and classification. It can identify fashion items such as clothing and accessories.
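A short detection sketch using the standard DETR API in transformers. The fine-tuned Fashionpedia checkpoint's repo id isn't stated above, so the base model id is used as a placeholder:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

# Base checkpoint as a placeholder; swap in the Fashionpedia fine-tuned id.
CKPT = "facebook/detr-resnet-50-dc5"

processor = AutoImageProcessor.from_pretrained(CKPT)
model = DetrForObjectDetection.from_pretrained(CKPT)

image = Image.new("RGB", (800, 600))  # stand-in for a product photo
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Keep detections above a confidence threshold, mapped back to pixel coords
# (target_sizes takes (height, width) per image).
detections = processor.post_process_object_detection(
    outputs, threshold=0.7, target_sizes=torch.tensor([[600, 800]])
)[0]
for score, label, box in zip(detections["scores"], detections["labels"], detections["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 2), box.tolist())
```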
facebook
MobileLLM-Pro is an efficient 1-billion-parameter on-device language model from Meta, optimized for mobile hardware. It supports a 128k context length and delivers high-quality inference. The model is trained with knowledge distillation, outperforms similarly sized models on multiple benchmarks, and supports near-lossless 4-bit quantization.
MobileLLM-R1 is a series of efficient reasoning models released by Meta, available at three scales: 140M, 360M, and 950M. The series is optimized for mathematical, programming, and scientific problems, matching or exceeding much larger models despite its small parameter counts.
MobileLLM-R1 is a series of efficient reasoning language models released by Meta, focused on solving mathematical, programming, and scientific problems. The models achieve strong performance at small parameter scales and ship with a complete training recipe and data sources to support reproducible research.
MobileLLM-R1 is a series of efficient reasoning models released by Meta, focused on solving mathematical, programming, and scientific problems. Pre-trained on only about 2T high-quality tokens, the models achieve excellent results across multiple benchmarks.
MobileLLM-R1 is a series of efficient reasoning models focused on mathematics, programming, and science, achieving excellent performance with less training data and shipping with a complete training recipe and data sources.
MobileLLM-R1 is a series of efficient reasoning models launched by Meta, focused on solving mathematical, programming, and scientific problems. The series comes in three sizes, 140M, 360M, and 950M, and offers strong reasoning ability and reproducibility.
MobileLLM-R1 is an efficient reasoning model in the MobileLLM series, optimized for mathematics, programming, and scientific problems. It achieves higher accuracy at a smaller parameter scale, with low training cost and high efficiency.
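A minimal generation sketch with the standard transformers causal-LM API. The checkpoint id below is an assumption based on the naming pattern of the entries above; check the facebook organization on Hugging Face for the exact repo names of the 140M/360M/950M variants:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; verify the exact MobileLLM-R1 checkpoint names on the Hub.
CKPT = "facebook/MobileLLM-R1-950M"

tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForCausalLM.from_pretrained(CKPT)

# A math-style prompt, since the series targets math/coding/science reasoning.
prompt = "Compute 12 * 17 step by step."
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```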
MapAnything is an end-to-end trained Transformer model that takes multiple modalities as input and directly regresses the decomposed, metric 3D geometry of a scene. It supports more than 12 different 3D reconstruction tasks, including multi-image SfM, multi-view stereo, and monocular metric depth estimation.
DINOv3 is a series of general-purpose visual foundation models that outperform specialized state-of-the-art models on a wide range of visual tasks without fine-tuning. The models generate high-quality dense features and perform strongly across visual tasks, significantly surpassing previous self-supervised and weakly supervised foundation models.
DINOv3 is a series of general-purpose visual foundation models developed by Meta AI. Without fine-tuning, they outperform specialized state-of-the-art models on a wide range of visual tasks. Trained with self-supervised learning, the models generate high-quality dense features and perform strongly on tasks such as image classification, segmentation, and depth estimation.
DINOv3 is a versatile visual foundation model developed by Meta AI that outperforms specialized models on a wide range of visual tasks without fine-tuning. It generates high-quality dense features and significantly surpasses previous self-supervised and weakly supervised foundation models.
DINOv3 is a series of general-purpose visual foundation models developed by Meta AI that outperform specialized state-of-the-art models on a variety of visual tasks without fine-tuning. Built on the Vision Transformer architecture and pre-trained on 1.689 billion web images, the models generate high-quality dense features and perform strongly on tasks such as image classification, segmentation, and retrieval.
DINOv3 is a series of general-purpose visual foundation models that outperform specialized state-of-the-art models on a wide range of visual tasks without fine-tuning. Using self-supervised learning, the models generate high-quality dense features and significantly surpass previous self-supervised and weakly supervised foundation models.
DINOv3 is a series of general-purpose visual foundation models that outperform specialized state-of-the-art techniques on a wide range of visual tasks without fine-tuning. The models generate high-quality dense features through self-supervised learning and significantly surpass previous self-supervised and weakly supervised foundation models.
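A feature-extraction sketch using the generic AutoModel interface. The checkpoint id below is an assumption (patterned on the DINO naming convention and the 1.689B-image LVD pre-training dataset mentioned above); check the facebook organization on Hugging Face for the real DINOv3 repo names:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

# Assumed repo id; verify the exact DINOv3 checkpoint names on the Hub.
CKPT = "facebook/dinov3-vitb16-pretrain-lvd1689m"

processor = AutoImageProcessor.from_pretrained(CKPT)
model = AutoModel.from_pretrained(CKPT)

image = Image.new("RGB", (224, 224))  # stand-in for a real photo
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    # Dense per-patch embeddings, usable for segmentation, retrieval, etc.
    features = model(**inputs).last_hidden_state

print(features.shape)  # (batch, num_tokens, hidden_dim)
```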
MetaCLIP 2 (worldwide) is a multilingual zero-shot image classification model built on the Transformer architecture. It supports vision-language understanding across languages worldwide and can classify images into arbitrary label sets without task-specific training.
nvidia
ESM-2 is a protein language model optimized by NVIDIA with the TransformerEngine library, based on the original ESM-2 model from Facebook Research. It has identical weights and matches the original's outputs within numerical precision, and is suitable for protein sequence analysis tasks.
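An embedding-extraction sketch using the original ESM-2 release in transformers (per the entry above, the NVIDIA variant matches its outputs within numerical precision). The smallest public variant is used here to keep the example light; the protein sequence is an arbitrary illustration:

```python
import torch
from transformers import AutoTokenizer, EsmModel

# Smallest public ESM-2 variant, for illustration; larger ones follow the
# same facebook/esm2_t{layers}_{params}_UR50D naming pattern.
CKPT = "facebook/esm2_t6_8M_UR50D"

tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = EsmModel.from_pretrained(CKPT)

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # example protein sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    # Per-residue embeddings (plus special tokens), usable for downstream
    # protein analysis tasks such as contact or function prediction.
    embeddings = model(**inputs).last_hidden_state

print(embeddings.shape)  # (1, seq_len + special tokens, hidden_dim)
```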
Anjan9320
This is an ultra-lightweight Hindi speech synthesis model based on the Facebook MMS project. It adopts the VITS architecture and converts Hindi text into high-quality, natural, fluent speech. The model is optimized for Hindi and offers efficient inference.
This is an ultra-lightweight Hindi speech synthesis model based on the Facebook MMS project, specifically optimized for female voices. The model converts Hindi text into natural, fluent female speech, runs light and fast, and uses a stochastic duration predictor to generate speech with varied rhythm.
MCP server for Facebook page management
This MCP server project searches and analyzes Facebook's public Ad Library. Through clients such as Claude, it can query a business's ad placements, including ad counts, ad types, content analysis, and competitor comparison.
yt-dlp-mcp is an MCP server implementation that integrates yt-dlp, providing video and audio download capabilities to LLMs, with support for YouTube, Facebook, TikTok, and other platforms.
An integration of MCP with Facebook's idb for automated iOS device management.
An MCP server implementation that provides video transcription (e.g., for YouTube, Facebook, and TikTok) and can be integrated with LLMs.