
Feb 06 not much happened today

Source: news.smol.ai • 19 days ago

TL;DR: Feb 06 — A/B tests crown new coding leaders; agent “teams” mature; evals get provenance; world models hit production

Major Highlights:

  • GPT-5.3-Codex vs Claude Opus 4.6: clear upgrades with distinct strengths
    • Broad A/B testing across developers characterizes Codex as detail-obsessed and strong on scoped coding tasks; Opus 4.6 feels more ergonomic for exploratory work and planning. Reports highlight Codex’s “auto compaction”/context garbage collection and frequent progress updates as UX wins for long jobs. In a concrete “AI engineer-in-the-loop” benchmark (Karpathy’s nanochat GPT-2 speedrun), Opus 4.6 delivered measurable wall-clock speedups via torch.compile, optimizer, and memory tweaks; Codex-5.3-xhigh proposed strong ideas but sometimes regressed quality, potentially during “0% context” episodes.
  • Agent swarms evolve toward software orgs in a box
    • Parallel-agent systems are converging on org-design patterns: task assignment, file locking, QA, and git-based synchronization. Anthropic-style “agent teams” coordinating edits and state are cited as a practical step-change. Reliability themes center on robust traces, sandboxes, and type-safe state (LangChain/LangSmith improvements; deepagents adding sandboxes like daytona/deno/modal/node VFS). The “Recursive Language Model” framing argues agents should be REPL-native, storing context in variables and exchanging structured values to combat prompt/context rot.
  • Benchmarking trust: provenance over perfection
    • Hugging Face launches Community Evals with PR-based submissions, YAML-stored results in model repos, and Inspect AI reproducibility badges, prioritizing transparent provenance amid contamination/saturation concerns. Counterpoint: difficult evals still have headroom (SWE-bench Multilingual <80%, SciCode 56%, CritPt 12%, VideoGameBench 1%). Opus 4.6 posts big jumps on math and arenas; Epoch reports Tier 4 at 21% (10/48), roughly tied with GPT-5.2 xhigh at 19% and behind GPT-5.2 Pro at 31%. StepFun details infra for reproducible scoring and urges that "evaluation should slightly lead training."
  • World models in production: Waymo + DeepMind’s Genie 3
    • Waymo unveils generative simulation using Genie 3 to create hyper-real, promptable “what-if” scenarios mapped to Waymo’s camera + 3D lidar stack, enabling rare/impossible event training (tornadoes, plane landings) before real-world exposure.
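The "Recursive Language Model" framing above can be made concrete with a small sketch: context lives in named variables, and agents exchange only structured values rather than full transcripts. `AgentState`, `handoff`, and the parser/fixer roles here are hypothetical illustrations, not an API from any cited framework.

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class AgentState:
    """REPL-style agent memory: context lives in named variables,
    not in an ever-growing prompt string."""
    vars: dict[str, Any] = field(default_factory=dict)

    def set(self, name: str, value: Any) -> None:
        self.vars[name] = value

    def get(self, name: str) -> Any:
        return self.vars[name]

    def handoff(self, names: list[str]) -> dict[str, Any]:
        """Pass only the structured values another agent needs,
        instead of forwarding the full transcript (combats context rot)."""
        return {n: self.vars[n] for n in names}


# One agent distills a noisy log into a structured value...
parser = AgentState()
parser.set("failing_tests", ["test_io", "test_gc"])
parser.set("raw_log", "…thousands of lines of CI output…")

# ...and hands off only the distilled value, never the raw log.
fixer = AgentState()
fixer.vars.update(parser.handoff(["failing_tests"]))
print(fixer.get("failing_tests"))
```

The design point is that the handoff is typed and selective: the receiving agent's context cannot rot under data it never received.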

Key Technical Details:

  • Coverage: 12 subreddits, 544 Twitter accounts, 24 Discords (254 channels, 8,727 messages); ~666 minutes of reading saved at 200 wpm.
  • Codex product strategy: claims of no public GPT-5.3-Codex API; Sam Altman solicits pricing input—suggesting product-first go-to-market and harder third-party benchmarking.
  • Agent infra: LangSmith trace previews and voice-agent debugging; deepagents sandbox backends; emphasis on state control and evals.
  • Benchmarks: Opus 4.6 rises on Arena/frontier math; chess/complex reasoning still uneven.
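The "minutes saved" coverage figure is plain words-per-minute arithmetic. The ~133,000-word total below is an assumption inferred from the stated 666 minutes at 200 wpm, not a number from the digest.

```python
def minutes_saved(total_words: int, wpm: int = 200) -> float:
    """Reading time in minutes at a given words-per-minute rate."""
    return total_words / wpm


# Working backwards: ~666 minutes at 200 wpm implies roughly
# 666 * 200 ≈ 133,000 words of source material (an assumption).
print(round(minutes_saved(133_200)))  # -> 666
```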

Community Response/Impact:

  • Practitioners report real productivity gains but echo @karpathy’s caution: current agents can chase spurious 1% wins, miss validation, violate repo styles, and misread result tables—useful with oversight, not autonomous yet.
  • Growing interest in “software teams in a box,” but tooling strain (git/package managers) and coordination costs are pain points.
  • Broad support for transparent, decentralized evals; debate persists on benchmark saturation vs unsolved fronts.
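The coordination costs called out above are the motivation for the task-assignment/file-locking patterns in these systems. A toy version, assuming a simple `O_EXCL` sidecar-lock scheme (hypothetical; the systems cited use git-based synchronization and richer coordination):

```python
import errno
import os
import tempfile
from contextlib import contextmanager


@contextmanager
def file_lock(path: str):
    """Claim a file before editing it. The .lock sidecar is created with
    O_EXCL, so exactly one agent can hold the claim; others fail fast
    instead of silently clobbering each other's edits."""
    lock = path + ".lock"
    try:
        fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except OSError as e:
        if e.errno == errno.EEXIST:
            raise RuntimeError(f"{path} is already claimed by another agent")
        raise
    try:
        yield
    finally:
        os.close(fd)
        os.remove(lock)


# Demo: while one agent holds the lock, a second claim on the same file fails.
workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "module.py")
with file_lock(target):
    try:
        with file_lock(target):
            print("second claim succeeded (should not happen)")
    except RuntimeError:
        print("second claim rejected while lock is held")
```

Failing fast on a contested file is the point: the expensive failure mode in parallel-agent systems is two agents editing the same file and merging garbage.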

First Principles Analysis:

  • The shift from prompt-centric “LLM + tools” to RLM/REPL-native agents formalizes state and interfaces, reducing context drift and enabling composability—key for scaling reliability.
  • Eval provenance is the pragmatic fix: we can’t fully prevent contamination, but we can make claims auditable and reproducible.
  • World models close the sim-to-real gap by aligning generative priors with sensor modalities, unlocking safer, rarer-edge-case training at scale.
  • A product-first Codex (limited API) hints at controlled UX, stronger moat via integration, and curated eval narratives—at the cost of open benchmarking.
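One way to read "auditable and reproducible" concretely: pin each score to the model revision and harness version, and fingerprint the record so later edits are detectable. The `EvalRecord` schema below is a hypothetical sketch, not Hugging Face's actual Community Evals format.

```python
import hashlib
import json
from dataclasses import asdict, dataclass


@dataclass
class EvalRecord:
    """Minimal provenance record for a benchmark claim (hypothetical schema)."""
    model_repo: str
    model_revision: str   # git commit of the evaluated weights
    benchmark: str
    harness_version: str
    score: float

    def fingerprint(self) -> str:
        """Content hash over a canonical serialization, so any later
        edit to the claim changes the fingerprint."""
        canonical = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()[:16]

    def to_yaml(self) -> str:
        """Render as the kind of YAML blob that could live in a model repo."""
        lines = [f"{k}: {v}" for k, v in asdict(self).items()]
        lines.append(f"fingerprint: {self.fingerprint()}")
        return "\n".join(lines)


rec = EvalRecord(
    model_repo="org/model",            # hypothetical repo name
    model_revision="abc123",
    benchmark="swe-bench-multilingual",
    harness_version="inspect-ai==0.3",  # hypothetical version pin
    score=0.78,
)
print(rec.to_yaml())
```

The fingerprint does not prevent contamination; it only makes the claim tamper-evident and tied to a specific, re-runnable configuration, which is exactly the provenance-over-perfection trade described above.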