May 04, 2026 09:21 AM
Anthropic appears to have started a fresh round of red teaming on a new internal build. The company is set to host its Code with Claude developer conference in San Francisco on May 6. The timing suggests that the model is being hardened ahead of an announcement timed to the event. The red team round is consistent with the company's responsible scaling policy, which calls for jailbreak probes and constitutional classifier stress tests before any frontier-class deployment.
Read More
Google is testing a new Omni model for video generation, potentially unifying its video and image-generation tools. The Omni model appears in Gemini's video generation UI, suggesting it might become a public product name. A launch during Google I/O 2026 is possible amidst increasing AI video competition.
Read More
OpenAI updated Codex with animated Pets, which appear as overlays on the screen and interact via short message bubbles. Codex also now auto-imports configuration files from other coding agents and features a new dictation dictionary to improve voice input accuracy. These updates aim to enhance Codex's usability and appeal as a comprehensive desktop application.
Read More
DeepSeek's latest preview models, DeepSeek-V4-Pro and DeepSeek-V4-Flash, are both Mixture-of-Experts models with one-million-token context windows. Pro has 1.6 trillion total parameters with 49 billion active per token, while Flash has 284 billion total with 13 billion active. DeepSeek-V4-Pro is now the largest open-weights model, and its small active-parameter fraction also makes it very cheap to run.
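The low inference cost follows directly from the active-parameter counts quoted above; a quick sketch of the arithmetic (numbers are those in the summary, nothing else assumed):

```python
# Active-parameter fractions for the DeepSeek-V4 previews,
# using the totals reported above (trillion = 1e12, billion = 1e9).
models = {
    "DeepSeek-V4-Pro":   (1.6e12, 49e9),   # (total, active)
    "DeepSeek-V4-Flash": (284e9,  13e9),
}

for name, (total, active) in models.items():
    frac = active / total
    print(f"{name}: {frac:.1%} of parameters active per token")
```

Roughly 3% of Pro's weights and under 5% of Flash's participate in any single forward pass, which is why per-token compute stays modest despite the headline parameter counts.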
Read More
This post looks at how pricing compares across coding plans and APIs. Codex is the most heavily subsidized, though most of the other plans are subsidized to some degree as well. Claude Pro costs roughly 10 times more per token than the rest.
Read More
Large language models are set to be one of the largest computing infrastructure projects ever. This post is the first part of a series about LLM architecture and its implications for reasoning. It looks at why the transformer architecture was so impactful for LLMs.
Read More
Perplexity emphasizes modular Agent Skills for enhancing its frontier agent products, with specific designs and hierarchies to ensure high-quality user experiences. Unlike traditional software, Skill development prioritizes detailed, context-specific design principles where real queries and evaluations shape their necessity and content. Maintaining these Skills involves constant iteration, testing across multiple models, and prioritizing efficiency and simplicity due to the inherent 'cost' each Skill introduces.
Read More
AutoRound is an advanced quantization toolkit designed for large language models and vision-language models. It achieves high accuracy at ultra-low bit widths with minimal tuning. AutoRound integrates with Transformers, vLLM, SGLang, and more, and can quantize a 7B model in about 10 minutes on a single GPU.
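To make "ultra-low bit widths" concrete, here is a minimal round-to-nearest int4 quantization sketch; this is the naive baseline that learned-rounding methods like AutoRound improve upon, not AutoRound's actual API, and the weights are random illustrative values:

```python
import numpy as np

# Naive round-to-nearest (RTN) int4 weight quantization.
# AutoRound's contribution is learning the up/down rounding decision
# per weight instead of always snapping to the nearest level.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8)).astype(np.float32)   # toy weight tensor

bits = 4
qmax = 2 ** (bits - 1) - 1            # 7 for signed int4
scale = np.abs(w).max() / qmax        # per-tensor scale (per-group in practice)

q = np.clip(np.round(w / scale), -qmax - 1, qmax)   # quantize to integers
w_hat = q * scale                                    # dequantize back to float

err = np.abs(w - w_hat).max()
print(f"max abs reconstruction error: {err:.4f}")
```

RTN's reconstruction error is bounded by half a quantization step; learned rounding trades that local bound for lower end-to-end output error.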
Read More
A scalable method generates realistic virtual computer environments and long-horizon simulations, producing rich training signals that improve agent performance across productivity tasks.
Read More
Edit-R1 introduced a chain-of-thought reward model that evaluates image edits through structured reasoning, improving alignment and performance in text-guided editing tasks.
Read More
Replit's Amjad Masad highlights strong growth, nearing a billion-dollar run rate with 300% net revenue retention. Unlike Cursor, which struggles with negative margins, Replit remains gross-margin positive and appeals to non-technical users with its secure, end-to-end platform. While Masad remains committed to Replit's independence, he acknowledges open discussions with potential acquirers and expresses frustration with Apple's alleged discriminatory App Store practices, suggesting possible legal action.
Read More
This post walks through the inference pipeline from tokenization and embeddings through stacked self-attention layers, then splits generation into two distinct phases on the same GPU: compute-bound prefill that processes all input tokens in parallel and memory-bound decode that emits one token at a time.
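The two phases can be made concrete with a toy single-head attention cache (illustrative names and shapes, not a real model; prefill here only computes the last position's output for brevity):

```python
import numpy as np

d = 8                                  # toy model dimension
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def attend(q, K, V):
    # q: (1, d); K, V: (t, d) — attention over everything in the cache
    scores = q @ K.T / np.sqrt(d)
    p = np.exp(scores - scores.max())
    p /= p.sum()
    return p @ V

def prefill(x):
    # Compute-bound phase: all input tokens are projected in parallel,
    # filling the KV cache in a single large matmul.
    K, V = x @ Wk, x @ Wv
    return attend(x[-1:] @ Wq, K, V), K, V

def decode_step(x_new, K, V):
    # Memory-bound phase: one new token per step; append its K/V to the
    # cache, then attend over the full history.
    K = np.vstack([K, x_new @ Wk])
    V = np.vstack([V, x_new @ Wv])
    return attend(x_new @ Wq, K, V), K, V

prompt = rng.normal(size=(5, d))       # 5 prompt tokens
out, K, V = prefill(prompt)
out, K, V = decode_step(rng.normal(size=(1, d)), K, V)
print(K.shape)                         # cache grew from (5, 8) to (6, 8)
```

Prefill touches every weight once for many tokens (high arithmetic intensity); each decode step touches every weight for a single token, so it is dominated by moving weights and the growing cache through memory.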
Read More
Mode collapse occurs when models repeatedly generate the most common outputs, leading to homogenous results; for example, an image model trained on unbalanced data produces far more dogs than cats. The same dynamic shows up in domains like grant-making and music, as systems become increasingly specialized over time based on prior outputs and successes. To counteract it, introduce variability or change external pressures to diversify and prevent over-specialization.
Read More
Comparing open-source models to closed APIs is flawed, as they serve different purposes.
Read More
Many of the companies have said their deals with the Department of War include commitments that their tools wouldn't be used for mass surveillance or autonomous weapons.
Read More
The investors want to create a company that helps teach businesses how to incorporate AI across their operations.
Read More
One global vLLM pool is a poor default for mixed traffic.
Read More