K-Scale Labs, a standout of Y Combinator's 2024 batch, is shutting down after failing to deliver its DevKit desktop humanoid robots. CEO Ben Bolt announced refunds and liquidation with $400K in remaining cash. Founded in 2024, the company had raised $4M in seed funding at a $50M valuation.
Runlayer raised an $11M seed round from Khosla Ventures and Felicis. After four months in stealth, it has signed eight enterprise clients, including Gusto and Instacart. Its platform combines a gateway, threat detection, and observability in a single console for MCP security. It was founded by the creators of Nanit and Vowel, with an author of the MCP spec as an advisor.
Bezos joins Project Prometheus as co-CEO. The secretive AI startup has raised $6.2B in seed funding, partly from Bezos himself, and is focused on physical AI with a roughly 100-person team drawn from top firms such as OpenAI.
Luminal raised $5.3M in seed funding led by Felicis Ventures. Founded by former Intel chip designers, it optimizes how computing resources are used, improving infrastructure efficiency and making the software stack easier for developers to work with.
Seedance 2.0 can transform images and text into professional-quality, cinematic AI videos.
High-speed AI image generation and editing tool
Create stunning AI art with Seedream 4.0.
Seedream4 is a 2K image generator with revolutionary AI technology, featuring an ultra-fast generation speed of 1.8 seconds.
magiccodingman
This is an experimental mixed-quantization model using MXFP4_MOE mixed-precision weights. While keeping accuracy close to Q8, it achieves a smaller file size and faster inference. The model explores combining MXFP4 weights with high-precision embedding/output weights, achieving near-lossless accuracy on dense models.
catalystsec
This is a lightweight 4-bit DWQ quantization of the ByteDance Seed-OSS-36B-Instruct model, distilled from the BF16 teacher with mlx-lm 0.27.1. It supports Chinese-English bilingual text generation.
giladgd
This is a static quantization of the ByteDance-Seed/Seed-OSS-36B-Instruct model, providing GGUF files at multiple quantization levels so developers can run the model efficiently on different hardware configurations.
lmstudio-community
Seed-OSS-36B-Instruct is a large language model developed by ByteDance-Seed, with 36 billion parameters, released under the Apache 2.0 open-source license. This build uses the Transformers library, supports vLLM and MLX optimizations, and has been quantized to 8 bits specifically for Apple Silicon chips, providing efficient text generation.
Seed-OSS-36B-Instruct is a large language model with 36 billion parameters developed by the ByteDance Seed team. Built on the Transformer architecture, this version is quantized with MLX and optimized specifically for Apple Silicon chips, so it runs efficiently in LM Studio.
bartowski
This is a quantized version of the Seed-OSS-36B-Instruct model from ByteDance-Seed. It is quantized at multiple precisions with llama.cpp, offering more than 20 quantization options from BF16 down to IQ2_XXS, so users can trade accuracy for efficiency on different hardware.
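As a rough illustration of why so many quantization levels exist: a checkpoint's size scales with the average bits stored per weight. The sketch below estimates file sizes for a 36B-parameter model; the bits-per-weight figures are approximate averages (each GGUF level mixes formats across tensors), not official sizes.

```python
# Ballpark GGUF file sizes for a 36B-parameter model.
# Bits-per-weight values are approximate averages, not exact specs.
PARAMS = 36e9

approx_bits_per_weight = {
    "BF16": 16.0,     # full-precision baseline
    "Q8_0": 8.5,      # 8-bit weights plus per-block scales
    "Q4_K_M": 4.8,    # mixed 4/6-bit k-quant
    "IQ2_XXS": 2.1,   # extreme low-bit quantization
}

for level, bits in approx_bits_per_weight.items():
    gb = PARAMS * bits / 8 / 1e9  # bits -> bytes -> GB
    print(f"{level:>8}: ~{gb:.0f} GB")
```

At BF16 this works out to roughly 72 GB, while IQ2_XXS lands near 9 GB, which is why the low-bit options matter for consumer hardware.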
gabriellarson
Seed-OSS is an open-source large language model series developed by the ByteDance Seed team, with strong long-context processing, reasoning, and agent-interaction capabilities. Trained on only 12T tokens, it performs excellently across public benchmarks and supports a native context length of up to 512K.
RDson
Seed OSS 36B Instruct is a large-scale language model developed by ByteDance, with 36 billion parameters, optimized specifically for instruction-following tasks. This build is packaged for the llama.cpp framework and supports efficient text generation.
yarikdevcom
Seed-OSS-36B-Instruct is a large language model with 36 billion parameters developed by ByteDance, open-sourced under the Apache-2.0 license. Optimized for instruction-following tasks, it supports text generation and dialogue with strong understanding and generation capabilities.
dnakov
Seed-OSS-36B-Instruct is a text generation model developed by ByteDance. Built on a large language model architecture at the 36B-parameter scale and optimized for instruction-following tasks, it supports both English and Chinese, uses the Apache-2.0 open-source license, and can be deployed efficiently through the vLLM and MLX inference frameworks.
Seed-OSS-36B-Instruct is a large-scale language model developed by ByteDance, with 36 billion parameters, focused on text generation tasks. This build runs on the MLX framework, supports both English and Chinese, and offers strong instruction-following and text generation capabilities.
Seed-OSS-36B-Instruct is a large language model with 36 billion parameters developed by ByteDance. This version is optimized with the MLX framework and focuses on text generation tasks. It supports both English and Chinese, uses the Apache 2.0 open-source license, and has strong instruction-following and content generation capabilities.
QuantTrio
Seed-OSS-36B-Instruct-AWQ is an AWQ-quantized version of the 36B-parameter large language model from the ByteDance Seed team. It offers strong long-context processing, reasoning, and agent capabilities, supports a context length of up to 512K, and provides flexible thinking-budget control.
ByteDance-Seed
Seed-OSS is an open-source large language model series developed by the ByteDance Seed team, with strong long-context processing, reasoning, agent-interaction, and general capabilities. Trained on only 12T tokens, it performs excellently across multiple public benchmarks.
prithivMLmods
cudaLLM-8B is a specialized language model from ByteDance Seed, designed to generate high-performance, syntactically correct CUDA kernel code. Built on the Qwen3-8B base model and trained in two stages, supervised fine-tuning followed by reinforcement learning, it helps developers write efficient GPU parallel code.
Seed-X-RM-7B is a reward model in the Seed-X series, specifically designed to evaluate translation quality. Based on the 7-billion-parameter Mistral architecture, this model can assign reward scores to multilingual translations and supports the evaluation of translation quality among 25 languages.
Seed-X-Instruct-7B is a powerful open-source multilingual translation model that pushes the limits of translation quality at the 7-billion-parameter scale. It combines excellent translation performance, a lightweight architecture, and broad domain coverage, providing strong support for translation research and applications.
Seed-X-PPO-7B is a powerful open-source multilingual translation language model trained with reinforcement learning, focusing on providing high-quality translation services.
ai9stars
AutoTriton is an 8-billion-parameter model for Triton programming. Built on Seed-Coder-8B-Reasoning and trained via supervised fine-tuning followed by reinforcement learning, it is the first RL-driven model designed specifically for Triton, automating complex kernel-development concerns such as compute-unit usage, memory management, and parallelism.
Mungert
Seed-Coder-8B-Reasoning is a code generation model based on the Transformer architecture, with strong reasoning capabilities suited to a variety of coding tasks. It performs excellently among open-source models of the same scale.
The ComfyUI MCP service provides image generation and prompt optimization, with support for automatic size adjustment and random seed generation.
An MCP server built on Amazon Bedrock's Nova Canvas model, providing high-quality AI image generation with support for text-to-image generation, negative-prompt optimization, size configuration, and seed control.
A Doubao Seedream 4.0 text-to-image server based on the MCP protocol. It supports AI image generation, automatic download, and local storage, and can be integrated into Claude Code.
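Servers like these are typically registered with a client such as Claude Code through an MCP configuration file. A minimal hypothetical entry might look like the fragment below; the package name and environment variable are placeholders, so check the specific server's README for the real values.

```json
{
  "mcpServers": {
    "seedream": {
      "command": "npx",
      "args": ["-y", "seedream-mcp-server"],
      "env": {
        "ARK_API_KEY": "<your-api-key>"
      }
    }
  }
}
```

Once registered, the client launches the server over stdio and exposes its image-generation tools in the conversation.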