Elon Musk's xAI launches Grokipedia, an online encyclopedia with 885,000+ articles. After a brief crash under heavy traffic, the site is back online. Musk criticizes Wikipedia's bias and pitches Grokipedia's fairer approach to information as a key step toward understanding the universe.
AI models fine-tuned on minimal data can generate literary works in famous authors' styles, outperforming human imitators. The finding feeds into U.S. copyright and fair-use debates.
AI models are running a real-money cryptocurrency trading test on the Hyperliquid platform. DeepSeek, Grok, Claude, and other mainstream models each receive $10,000 in initial funds and make autonomous trading decisions under identical instructions. The level playing field is meant to test how AI performs in real financial markets.
At the recently concluded China International Fair for the Disabled and China International Rehabilitation Expo, China Mobile's Embodied Intelligence Industry Innovation Center officially launched the 'Lingxi' Electronic Guide Dog. The launch marks an important step forward for intelligent assistive devices, offering a new mobility solution for visually impaired people. The 'Lingxi' Electronic Guide Dog combines lidar with monocular visual 3D reconstruction to obtain location information in real time and build high-precision maps; these maps can be stitched together and cover complex scenarios.
Use artificial intelligence to create personalized fairy tales for children.
A collection of experimental demos showcasing Meta's latest AI research achievements
A tool for quick and fair random group generation, useful for teachers, trainers, and team leaders.
A materials science model released by the FAIR Chemistry team.
facebook
V-JEPA 2 is a cutting-edge video understanding model developed by Meta's FAIR team. It extends the pre-training objectives of V-JEPA and delivers industry-leading video understanding capabilities.
Mungert
FairyR1-32B is an efficient large language model developed by Peking University DS-LAB, based on DeepSeek-R1-Distill-Qwen-32B. It achieves a balance between high performance and low-cost inference through an innovative 'distillation-fusion' process.
DevQuasar
FairyR1-32B is a large language model with 32B parameters, developed by PKU-DS-LAB, focusing on text generation tasks.
PKU-DS-LAB
FairyR1-32B is an efficient large language model based on DeepSeek-R1-Distill-Qwen-32B, optimized through distillation and merging processes, excelling in mathematical and programming tasks.
Meta's PyTorch-based pre-trained language model, compliant with FAIR Non-commercial Research License
jhu-clsp
Ettin is the first collection of encoder-only and decoder-only models trained with the same data, architecture, and training method, supporting fair comparison across scales.
Anzhc
A fast classification model based on the YOLOv8/YOLO11 architectures, trained on the FairFace dataset for facial race classification tasks.
Mitsua
Mitsua Likes is a Japanese/English text-to-image latent diffusion model developed based on the concept of collaborative art creation, trained exclusively with explicitly authorized and licensed data, and certified for fair training.
dima806
An image classification model based on Vision Transformer architecture, pre-trained on the ImageNet-21k dataset, suitable for multi-category image classification tasks
An image gender classification model based on the Vision Transformer (ViT) architecture with an accuracy of approximately 93.4%
entai2965
A high-quality Japanese-to-English neural machine translation model developed by MingShiba, optimized based on the fairseq framework and CTranslate2
This model adopts the FAIR Non-commercial Research License and is suitable for non-commercial research purposes, complying with the FAIR Acceptable Use Policy.
SWivid
F5-TTS is a flow matching-based speech synthesis model focused on fluent and faithful synthesis, well suited to scenarios like fairy tale narration.
apple
SAM 2.1 Tiny is a lightweight image and video universal segmentation model introduced by Facebook AI Research (FAIR), supporting controllable visual segmentation based on prompts.
SAM 2.1 Large is a universal segmentation model released by FAIR, suitable for promptable visual segmentation tasks in images and videos.
SAM 2.1 BasePlus is a general-purpose segmentation model released by FAIR in Core ML format, supporting promptable visual segmentation tasks in images and videos.
SAM 2 is a foundational model for promptable visual segmentation in images and videos developed by FAIR, supporting efficient segmentation through prompts.
SAM 2 is a foundational model proposed by FAIR to address promptable visual segmentation in images and videos.
SAM 2 is a foundational model for promptable visual segmentation in images and videos developed by FAIR, supporting universal segmentation tasks through prompts.
n8n is a workflow automation platform for technical teams, combining code flexibility with no-code speed. It offers over 400 integrations, native AI capabilities, and fair-code licensing, and supports self-hosted or cloud deployment.