
Dec 29: "Meta Superintelligence Labs acquires Manus AI for ~$4B, at $100M ARR, 9 months after launch"

Source: news.smol.ai • about 2 months ago

TL;DR: Meta Superintelligence Labs acquires Manus AI for ~$4B at $100M ARR, 9 months after launch

Major Highlights:

  • Meta buys Manus AI for ~$4B after hypergrowth to $100M ARR: Manus launched in March, raised at a $500M valuation from Benchmark in April, and hit $100M ARR by Dec 17. Over a 10-day sprint through Christmas, Alex Wang (with apps lead Nat Friedman) negotiated the sale to Meta Superintelligence Labs for an estimated $4B. For context, comparable fast-growth B2B startups fetch ~40–50x revenue; Manus was reportedly the “cheapest” among AI B2C category leaders.
  • Infra shakeups: vLLM “front door,” FP8 not a free win on MI300X, Weaviate goes “ops real”: vLLM launched vllm.ai for installs, docs, and office hours while acknowledging doc gaps. Across both vLLM and sglang, AMD MI300X FP8 underperforms bf16 for MiniMax‑M2.1 (e.g., ~42 TPS FP8 vs ~55.7 TPS bf16 on vLLM; ~55 vs ~71 TPS on sglang). Weaviate shipped Object TTL, Java v6 client GA, Flat Index 1‑bit RQ quantization GA, zstd backups, and multimodal doc embeddings.
  • Open-weight momentum: GLM‑4.7, MiniMax‑M2.1, FLUX.2 Turbo, new 32B VLM: GLM‑4.7 is emerging as a default open coding model (top on Artificial Analysis; Baseten sees ~20% speed gains, improved TTFT). MiniMax‑M2.1 tops open models on Code Arena WebDev and posts strong tool-use metrics (82.83% tool call rate; 95.12% accuracy). fal open-sourced FLUX.2 [dev] Turbo with sub-second image gen and top open-source ELO. A Korean 32B VLM reports strong EN+KR scores with notable architecture/training changes.
  • Agents go production: Spotify’s playbook, docs for agents, and workflow patterns: Spotify automates large-scale code migrations with background agents by verifying end states, exposing a minimal tool surface (verify/git/bash), and maintaining AGENTS.md. The broader pattern: dual-audience docs (human + agent), CLI-first tasks, heavy queueing, minimal branching, and explicit config for reasoning/tool limits.
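The Spotify pattern above can be sketched in a few lines. This is a hypothetical illustration (the names `verify`, `run_migration`, and the placeholder command are mine, not Spotify's actual code): the agent gets a restricted tool surface, and success is decided only by a verifiable end state, never by the agent's self-report.

```python
import subprocess
import sys

TOOLS = {"verify", "git", "bash"}  # minimal tool surface the agent may use


def verify(repo_path: str) -> bool:
    """End-state check: the migration counts as done iff this exits 0.

    A real version would run the project's test suite here.
    """
    result = subprocess.run(
        [sys.executable, "-c", "raise SystemExit(0)"],  # placeholder command
        cwd=repo_path,
    )
    return result.returncode == 0


def run_migration(repo_path: str, agent_step, max_attempts: int = 3) -> bool:
    """Retry the agent until the end state verifies."""
    for _ in range(max_attempts):
        agent_step(repo_path)  # the agent edits code via its restricted tools
        if verify(repo_path):  # only the verified end state decides success
            return True
    return False
```

Separating "did the agent say it finished" from "does the end state verify" is what makes heavy queueing and minimal branching safe at scale.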

Key Technical Details:

  • Manus metrics: ~$4B price; $100M ARR in ~9 months; prior $500M valuation (April). Negotiation window ~10 days over holidays.
  • MI300X precision: MiniMax‑M2.1 on vLLM FP8 ~42 TPS vs bf16 ~55.7; on sglang FP8 ~55 vs bf16 ~71 TPS after patching.
  • Weaviate: Object TTL; Java v6 GA; Flat Index RQ Quantization (1‑bit) GA; zstd backups; multimodal page-image embeddings for text queries.
  • GLM‑4.7: top open-weight reliability/coding; internal adoption reported; ~20% faster on Baseten (token/s and TTFT).
  • MiniMax‑M2.1: Code Arena WebDev #1 open; ties GLM‑4.7 at 1445 overall; provider verifier: 82.83% tool use, 95.12% tool accuracy, 100% query success/quality.
  • FLUX.2 [dev] Turbo: distilled DMD2-style, sub-second image generation; top ELO among open-source image models (Artificial Analysis).
  • Context retention: ByteDance Seed 1.6/Flash added to Context Arena MRCR vs OpenAI o3/o4‑mini and budget-tier models at 128k context.
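As an arithmetic check on the MI300X numbers above, the reported figures put FP8 roughly a quarter slower than bf16 on both engines (values copied from the summary; the dict layout is just for illustration):

```python
# Reported MiniMax-M2.1 decode throughput on AMD MI300X (tokens/sec).
results = {
    "vllm":   {"fp8": 42.0, "bf16": 55.7},
    "sglang": {"fp8": 55.0, "bf16": 71.0},
}

for engine, tps in results.items():
    slowdown = 1 - tps["fp8"] / tps["bf16"]
    print(f"{engine}: FP8 is {slowdown:.0%} slower than bf16")
```

That works out to roughly a 25% deficit on vLLM and 23% on sglang, which is why the summary treats FP8 as a net loss here rather than a free win.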

Community Response/Impact:

  • Consolidation accelerates: A $4B exit for a B2C agent leader validates consumer agent PMF and raises the bar for independents.
  • Infra reality checks: FP8 on MI300X underperforming bf16 tempers “free speed” narratives; teams are re-evaluating precision configs.
  • API fragmentation pain: Growing calls for a unified wrapper over diverging provider SDKs to curb multi-model integration cost.
  • Agents-as-ops: AGENTS.md/CLAUDE.md patterns and CLI-first workflows become norms as orgs productionize coding agents.
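The "unified wrapper" idea from the API-fragmentation point can be sketched as a thin adapter layer. This is a hypothetical minimal design (all names here are illustrative; real wrappers would adapt actual vendor SDKs behind the shared interface):

```python
from typing import Protocol


class ChatProvider(Protocol):
    """Shared interface every provider adapter must satisfy."""

    def complete(self, prompt: str) -> str: ...


class EchoProvider:
    """Stand-in adapter; a real one would call a vendor SDK here."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


def chat(provider: ChatProvider, prompt: str) -> str:
    # Callers depend only on the shared interface, not any vendor SDK,
    # so swapping providers is a one-line change at the call site.
    return provider.complete(prompt)
```

The point of the pattern is that multi-model integration cost stays linear in the number of adapters, not quadratic in provider-by-feature combinations.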

First Principles Analysis:

  • The Manus deal signals that distribution and integration are king in consumer agents: with $100M ARR velocity, Meta can compound value via platform reach, on-device integration, and cross-product embedding, justifying growth multiples.
  • On the technical front, the MI300X FP8 results highlight that numeric-format gains depend on kernels, calibration, and model-specific characteristics; "precision ≠ performance" without end-to-end optimization.
  • Operationally, reliable agents require verifiability (clear end states, small tool surfaces, structured docs), shifting documentation and CI from human-first to human+agent co-design.