117B-parameter MoE with 5.1B active per token. Configurable reasoning effort, full chain-of-thought, and native tool use. Apache 2.0.
671B-parameter MoE reasoning model with 37B active per token. Hybrid thinking mode, Multi-head Latent Attention, 256 routed experts.
685B-parameter MoE with 256 routed experts, 8 active per token (37B active). DeepSeek Sparse Attention, integrated reasoning and tool use.
80B total with only 3B active per token using high-sparsity MoE. Hybrid Attention (Gated DeltaNet + Gated Attention), Multi-Token Prediction.
235B total with 128 experts, 8 active per token (22B active). State-of-the-art reasoning with extended thinking mode. 256K context.
235B MoE model optimized for general-purpose text generation without thinking blocks. Enhanced instruction following, 256K context.
480B-parameter MoE with 160 experts, 8 active per token (35B active). State-of-the-art agentic coding, repo-scale understanding, Fill-in-the-Middle.
30B-parameter dense vision-language model with a dedicated thinking mode for chain-of-thought visual reasoning.
30B-parameter dense vision-language model optimized for direct instruction following. Broad visual tasks, OCR, and image understanding.
Compact MoE with 30B total and 3B active per token. Dual mode (thinking/non-thinking), 100+ languages.
8B dense model optimized for low-latency responses and lightweight deployment. Dual mode, 100+ languages.
Native multimodal foundation model with hybrid Gated Delta Networks and sparse MoE (512 experts). 397B total / 17B active. 201 languages.
230B-parameter MoE with 10B active per token. Lightning Attention, 80.2% SWE-bench Verified, trained on 200K+ real-world environments.
1T-parameter native multimodal agentic model with 32B active per token. Agent Swarm (100 sub-agents, 1,500 tool calls). Modified MIT.
Lightweight 8B dense transformer with Grouped Query Attention. 128K context, 8 languages. Llama Community License.
Balanced 70B dense model with performance approaching the 405B model. Grouped Query Attention, 128K context, 8 languages.
Efficient MoE with 16 experts, 1 active per token (109B total / 17B active). 10M token context, native multimodal, early fusion.
Large-scale MoE with 128 routed experts (400B total / 17B active). 1M token context, native multimodal.
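For orientation, text models like those listed above are commonly served behind an OpenAI-compatible chat completions endpoint. The sketch below uses the openai Python client; the base URL, API key, model ID, and the reasoning-effort system prompt are illustrative assumptions, not values taken from this catalog.

    from openai import OpenAI

    # Hypothetical endpoint and credentials; substitute the real values for your deployment.
    client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_API_KEY")

    response = client.chat.completions.create(
        model="gpt-oss-120b",  # illustrative model ID; any chat model from the list above could go here
        messages=[
            # Some models accept a reasoning-effort hint in the system prompt (an assumption here).
            {"role": "system", "content": "Reasoning: high"},
            {"role": "user", "content": "Summarize the trade-off between total and active MoE parameters."},
        ],
    )
    print(response.choices[0].message.content)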
Fast text-to-image generation powered by Eigen AI. Transform text prompts into high-quality images.
Advanced image editing with automatic pipeline selection; supports both lightning and standard modes.
Image-to-video generation with prompt-guided motion.
Convert speech to text using Whisper. Record audio or upload files for transcription.
Transform text into natural-sounding speech with multiple voice options and styles.
Create expressive text-to-speech with voice cloning. Upload a reference sample or use the default Eigen AI voice.
Advanced multilingual text-to-speech with voice cloning. Supports named speakers and custom voices via uploaded reference audio.
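As a companion sketch, the speech-to-text entry above maps naturally onto the audio transcription call in the same Python client, assuming the service exposes an OpenAI-compatible audio endpoint; the base URL and model ID are again assumptions for illustration.

    from openai import OpenAI

    client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_API_KEY")  # hypothetical endpoint

    # Upload a local recording for Whisper-based transcription (the model ID is an assumption).
    with open("recording.wav", "rb") as audio_file:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)
    print(transcript.text)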