## Sources

1. [Fermi CEO and CFO Exit - $20B Nuclear AI Bet Implodes](https://awesomeagents.ai/news/fermi-nuclear-ai-crash-ceo-cfo-depart/)
2. [Amazon Bets $25B on Anthropic and 5GW of Trainium](https://awesomeagents.ai/news/amazon-25b-anthropic-trainium-100b-aws/)
3. [Distillation Leaks, Weak Agents, and Research Sabotage](https://awesomeagents.ai/science/distillation-leaks-weak-agents-research-sabotage/)
4. [Kimi K2.6 - Open Weights, 300 Agents, Top Coding Score](https://awesomeagents.ai/news/kimi-k2-6-agent-swarm-open-weight/)
5. [NVIDIA Lyra 2.0 - Explorable 3D Worlds from One Photo](https://awesomeagents.ai/news/nvidia-lyra-2-explorable-3d-worlds/)
6. [Claude Opus 4.7 Review: Coding Giant, Mixed Signals](https://awesomeagents.ai/reviews/review-claude-opus-4-7/)
7. [Lovable Users Report Leak of Chats, Code, Credentials](https://awesomeagents.ai/news/lovable-breach-chat-source-code-credentials/)
8. [Factory Raises $150M to Scale Enterprise AI Droids](https://awesomeagents.ai/news/factory-ai-150m-series-c-droids/)
9. [GitHub Bans Engineer Who Shipped 500 Agent PRs in 72 Hours](https://awesomeagents.ai/news/github-bans-500-agent-prs-72-hours/)
10. [Tesla Hid Thousands of Fatal Autopilot Incidents, RTS Says](https://awesomeagents.ai/news/tesla-hid-thousands-autopilot-incidents/)

---

### Amazon Bets $25B on Anthropic and 5GW of Trainium by Sophie Zhang

*   **Massive Financial Bet:** Amazon announced a $25 billion investment in Anthropic: $5 billion in immediate cash, with a further $20 billion unlocking as specific commercial milestones are met [1-3]. In return, Anthropic committed to spending more than $100 billion on AWS infrastructure over the next decade [1, 2].
*   **Infrastructure Land-Grab:** The deal secures up to 5 gigawatts of Trainium compute capacity for Anthropic, cementing Amazon's custom silicon as the primary backbone for Claude's training and inference workloads [2, 4]. Trainium3 delivers 4.4 times the compute performance of its predecessor and cuts server costs by up to 50% [5].
*   **Enterprise Integration:** The agreement unifies the customer experience, allowing AWS clients to provision the Claude Platform directly through existing AWS accounts and removing the need for a separate vendor relationship with Anthropic [2, 6, 7] (see the sketch after this list).
*   **Strategic Drawbacks:** Despite the massive scale, Amazon's Trainium still trails NVIDIA's raw compute power for the largest frontier training runs [8, 9]. Furthermore, Amazon's concurrent $50 billion commitment to OpenAI suggests this is less a principled technology bet than a hedge that keeps AWS profitable regardless of which AI lab wins [10].
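
A minimal invocation sketch for the unified-account flow. It assumes the integration surfaces through Amazon Bedrock's existing `converse` API, which the article does not confirm, and the model ID is a placeholder.

```python
# Hedged sketch: calling Claude from an existing AWS account via Bedrock.
# Assumption: the new deal reuses today's bedrock-runtime Converse API.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-example",  # placeholder; the article names no model ID
    messages=[{"role": "user", "content": [{"text": "Summarize our open tickets."}]}],
    inferenceConfig={"maxTokens": 512},
)
print(response["output"]["message"]["content"][0]["text"])
```

Billing and IAM stay inside the existing AWS account, which is the substance of the "no separate vendor relationship" claim.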

### Claude Opus 4.7 Review: Coding Giant, Mixed Signals by Elena Marchetti

*   **Top Coding Performance:** Claude Opus 4.7 is Anthropic's strongest available coding and agent model, claiming leading benchmark scores on SWE-bench Pro (64.3%), MCP-Atlas (77.3%), and CursorBench (70%) [11-13].
*   **Upgraded Vision Capabilities:** The model supports images up to 2,576 pixels on their longest side (about 3.75 megapixels), producing a 22-point jump in visual-navigation benchmarks and making it highly effective for UI review and document analysis [14, 15].
*   **Cost and Quality Regressions:** The official $5/$25 rate card is unchanged, but a new tokenizer effectively inflates API costs by 10-35% because it emits more tokens for identical inputs [11, 16, 17]. The model also regressed nearly 5 points on the BrowseComp web-research benchmark and produces degraded, mechanical prose in long-form writing tasks [11, 18-20].
*   **API Control Enhancements:** Opus 4.7 introduced "Task Budgets" to bound long-running sessions, an intermediate "xhigh" effort level for tuning reasoning depth, and improved cross-session memory [15, 21, 22] (a request sketch follows this list). However, it removed visible reasoning traces entirely, which breaks existing debugging pipelines [19, 20].
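
A request sketch under stated assumptions: the review names "Task Budgets" and an "xhigh" effort level but not their API field names, so `task_budget` and `effort` below are guesses passed through the SDK's `extra_body` escape hatch, and the model ID is taken from the review rather than a published model list.

```python
# Hedged sketch of the new controls on an Opus 4.7 request.
# "task_budget" and "effort" are HYPOTHETICAL field names; only
# messages.create() and its standard arguments are real SDK surface.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-7",  # model ID as given in the review
    max_tokens=4096,
    messages=[{"role": "user", "content": "Refactor the payment module and run its tests."}],
    extra_body={
        "task_budget": {"max_tool_calls": 200},  # hypothetical: bounds a long-running session
        "effort": "xhigh",                       # hypothetical: intermediate reasoning effort
    },
)
print(response.content[0].text)
```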

### Distillation Leaks, Weak Agents, and Research Sabotage by Elena Marchetti

*   **Subliminal Transfer of Biases:** A study by Dang et al. found that compressing models via distillation silently transfers unsafe behaviors from teacher to student [23, 24]. Stripping unsafe keywords from the training data is ineffective because the bias is structurally encoded in the decision sequences themselves (trajectory dynamics); student models reproduced the teacher's unsafe deletion behavior at rates of up to 100% [25-27].
*   **Weak-Link Optimization (WORC):** Bian et al. established that a multi-agent pipeline is only as accurate as its weakest sub-agent, since errors compound across sequential steps [28, 29]. By automatically identifying the bottlenecking sub-agent and routing extra compute to it, the WORC framework pushed reasoning accuracy to 82.2% [29, 30] (a toy illustration follows this list).
*   **ASMR-Bench Sabotage Detection:** Redwood Research released a benchmark showing that frontier models struggle to detect deliberately sabotaged machine-learning code [31, 32]. The best model, Gemini 3.1 Pro, achieved an AUROC of 0.77, meaning it ranks a randomly chosen sabotaged sample above a clean one only 77% of the time; simple "omissions" of key data were the hardest sabotage tactic to catch [32-34] (a short snippet below makes the metric concrete).
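
A toy illustration of the weakest-link arithmetic behind WORC, not the authors' code: a sequential pipeline's end-to-end accuracy is the product of its stages, so marginal compute pays off most at the worst stage. The per-agent accuracies below are hypothetical.

```python
# Toy model of WORC's core claim: accuracy multiplies across sequential
# agents, so the bottleneck stage dominates the product.
def pipeline_accuracy(stage_acc: list[float]) -> float:
    p = 1.0
    for a in stage_acc:
        p *= a
    return p

def route_extra_compute(stage_acc: list[float], boost: float = 0.05) -> list[float]:
    """Spend one unit of extra compute on the weakest stage (illustrative)."""
    worst = min(range(len(stage_acc)), key=lambda i: stage_acc[i])
    improved = stage_acc.copy()
    improved[worst] = min(1.0, improved[worst] + boost)
    return improved

acc = [0.95, 0.70, 0.90]                            # hypothetical per-agent accuracies
print(pipeline_accuracy(acc))                       # 0.5985, dominated by the 0.70 stage
print(pipeline_accuracy(route_extra_compute(acc)))  # 0.64125, from boosting the weak link
```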
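
To make the 0.77 figure concrete: AUROC is the probability that the detector scores a randomly chosen sabotaged sample above a randomly chosen clean one. The snippet below computes it on toy, purely illustrative scores.

```python
# AUROC on toy detector outputs (illustrative data, not from ASMR-Bench).
from sklearn.metrics import roc_auc_score

labels = [1, 1, 1, 1, 0, 0, 0, 0]                  # 1 = sabotaged, 0 = clean
scores = [0.9, 0.8, 0.4, 0.3, 0.7, 0.5, 0.2, 0.1]  # detector suspicion scores

print(roc_auc_score(labels, scores))  # 0.75: 12 of 16 sabotaged/clean pairs ranked correctly
```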

### Factory Raises $150M to Scale Enterprise AI Droids by Sophie Zhang

*   **High-Value Funding:** Factory closed a $150 million Series C funding round led by Khosla Ventures, reaching a $1.5 billion valuation after six consecutive months of revenue doubling month over month [35-37].
*   **Full SDLC Automation:** Unlike standard coding assistants, Factory builds autonomous "Droids" that handle the entire software development lifecycle, including tedious tasks like testing, code review, documentation updates, and migrations [38, 39].
*   **Multi-Agent Missions:** The platform organizes complex objectives into "Missions," breaking work into subtasks for individual Droids that maintain their own persistent contexts in Factory's macOS and Windows desktop app [40, 41] (a data-model sketch follows this list).
*   **Model Agnosticism and Limitations:** The system automatically routes between models like Claude and DeepSeek based on cost and capability [42]. However, Factory's own Legacy-Bench results show that frontier models universally fail at modernizing archaic infrastructure: 31 COBOL tasks went completely unsolved [43-45].
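
A hypothetical data-model sketch of the Mission structure described above; none of these names are Factory's actual API, they only illustrate the fan-out into per-Droid subtasks with persistent context.

```python
# Illustrative only: a mission fans out into subtasks, each owned by a
# Droid that keeps its own persistent context across assignments.
from dataclasses import dataclass, field

@dataclass
class Droid:
    name: str
    context: list[str] = field(default_factory=list)  # persistent per-droid memory

    def run(self, subtask: str) -> str:
        self.context.append(subtask)  # context survives into the next subtask
        return f"{self.name} completed: {subtask}"

@dataclass
class Mission:
    objective: str
    assignments: list[tuple[Droid, str]]

    def execute(self) -> list[str]:
        return [droid.run(task) for droid, task in self.assignments]

mission = Mission(
    objective="Upgrade the auth service",
    assignments=[
        (Droid("code-review"), "audit token handling"),
        (Droid("documentation"), "update the API docs"),
    ],
)
print(mission.execute())
```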

### Fermi CEO and CFO Exit - $20B Nuclear AI Bet Implodes by Daniel Okafor

*   **Executive Collapse:** Fermi America's CEO Toby Neugebauer and CFO Miles Everson departed in the same week, following an 83% stock collapse that dragged the company's valuation from $20 billion down to $3.4 billion [46, 47]. 
*   **Zero Revenue and Stalled Construction:** Six months after its Nasdaq IPO, the nuclear-powered data-center startup has no revenue and no confirmed anchor tenant, and construction at its 5,800-acre Texas site has stalled [46, 48, 49].
*   **Loss of Trust:** Neugebauer's confrontational dealings with hyperscalers, together with his history at the bankrupt startup GloriFi, hampered deal negotiations and eroded investor trust [50].
*   **Lingering Market Potential:** Despite the corporate unraveling, a bull case remains: data-center power demand is surging, and hyperscalers still desperately need the gigawatt-scale supply Fermi proposed [51, 52].

### GitHub Bans Engineer Who Shipped 500 Agent PRs in 72 Hours by Sophie Zhang

*   **Massive Agent Run:** Korean CTO Junghwan Na deployed a 13-step agent harness that submitted over 500 commits and 130 pull requests across 100 major open-source repositories in just 72 hours [53, 54]. 
*   **Pipeline Ingenuity:** His agent bypassed standard "AI slop" filters with a hard gate requiring each bug to be reproduced locally before a fix was submitted, and by analyzing the 10 most recently merged PRs in each repository to mimic its coding style [55, 56] (a sketch of both gates follows this list). The PRs were good enough to be accepted by maintainers at Kubernetes, Hugging Face, and Ollama [54].
*   **Platform Ban:** Despite the quality of the contributions, GitHub suspended Na's account for spam: the platform's abuse-detection heuristics trigger on velocity and cannot distinguish a disciplined agent harness from a malicious bot [54, 57, 58].
*   **Labor Scarcity Shift:** The incident shows that finding fixes and writing PRs are now abundant commodities; the truly scarce resource in open-source software is human attestation, the act of a developer putting their name and identity behind an approval or CLA [59, 60].
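
A sketch of the two gates described above, assuming nothing about Na's actual harness beyond what the article states: a reproduction check that must fail before any fix is attempted, and a style pass over the repository's 10 most recently merged PRs via GitHub's public REST API.

```python
# Gate 1: the bug must reproduce locally (the test fails) before any patch
# is written. Gate 2: recent merged PRs become style references.
import subprocess
import requests

def bug_reproduces(repro_cmd: list[str]) -> bool:
    """Hard gate: a nonzero exit code means the failing test actually fails."""
    return subprocess.run(repro_cmd, capture_output=True).returncode != 0

def recent_merged_prs(owner: str, repo: str, n: int = 10) -> list[dict]:
    """Fetch recently closed PRs and keep the merged ones (GitHub REST v3)."""
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/pulls",
        params={"state": "closed", "sort": "updated", "direction": "desc", "per_page": 50},
        timeout=30,
    )
    resp.raise_for_status()
    return [pr for pr in resp.json() if pr.get("merged_at")][:n]

# Example style pass: titles of the last 10 merged PRs in a target repo.
for pr in recent_merged_prs("ollama", "ollama"):
    print(pr["number"], pr["title"])
```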

### Kimi K2.6 - Open Weights, 300 Agents, Top Coding Score by Sophie Zhang

*   **Open-Weight Champion:** Moonshot AI released Kimi K2.6, a 1-trillion-parameter MoE model (32B active parameters) under a Modified MIT license [61, 62]. It scored 58.6% on SWE-bench Pro, the highest among open models, beating GPT-5.4 [62, 63].
*   **Massive Agent Swarms:** The release triples swarm capacity, from 100 to 300 independent sub-agents, with runs able to sustain 4,000 consecutive tool-call steps [62, 64, 65].
*   **Human-Agent Handoffs:** K2.6 introduces "Claw Groups," which let developers take over specific subtasks mid-execution and hand them back to the agent without killing the entire job [66] (a control-flow sketch follows this list).
*   **Deployment Hurdles:** The model is vision-enabled and highly capable, but running a 1T-parameter MoE requires massive hardware (an H100 cluster or similar), so practical use for most developers will rely on API access [67, 68]. Furthermore, commercial users with more than 100M monthly active users must credit the model in their UI [69].
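
A control-flow sketch of what a "Claw Group" handoff might look like; every name here is invented, since the article describes the behavior but not Moonshot's API.

```python
# Hypothetical handoff state machine: a human takes over ONE subtask and
# hands it back, while sibling subtasks in the swarm keep running.
from enum import Enum, auto

class State(Enum):
    AGENT_RUNNING = auto()
    HUMAN_CONTROL = auto()

class Subtask:
    def __init__(self, name: str):
        self.name, self.state, self.log = name, State.AGENT_RUNNING, []

    def take_over(self) -> None:
        self.state = State.HUMAN_CONTROL  # interrupts this subtask only
        self.log.append("human took over")

    def hand_back(self) -> None:
        self.state = State.AGENT_RUNNING  # resumes without restarting the job
        self.log.append("handed back to agent")

task = Subtask("write-migration")
task.take_over()
task.hand_back()
print(task.state, task.log)
```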

### Lovable Users Report Leak of Chats, Code, Credentials by Elena Marchetti

*   **Critical Security Flaw:** Developer Morgan Linton exposed that a free Lovable account can be used to extract the AI chat histories, source code, and database credentials of other Lovable users [70, 71].
*   **Missing Row Level Security:** The vulnerability stems from the platform's AI generating Supabase tables without enabling Row Level Security (RLS) by default [72]. Because the "anon key" is public by design, any client holding it can read unprotected tables [72].
*   **Legacy Blast Radius:** While newer projects were patched, apps created before November 2025 remain fundamentally exposed [71, 73]. This marks the fourth time in a year the company has been warned about this exact defect, previously identified as CVE-2025-48757 [71, 73, 74]. 
*   **Mitigation Steps:** Affected developers must audit their tables in the Supabase dashboard, assume any API keys pasted into chat histories are compromised, rotate their anon keys, and write RLS policies by hand to secure their apps [75] (see the sketch after this list).
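
A minimal demonstration of why the leak works, using the supabase-py client; the URL, key, and table name are placeholders. The anon key ships to every browser by design, so any table without RLS is readable by whoever holds it.

```python
# With RLS disabled on a table, the PUBLIC anon key reads everyone's rows.
# URL, key, and table name below are placeholders.
from supabase import create_client

ANON_KEY = "eyJ..."  # public by design; visible in any deployed app's JS bundle
client = create_client("https://your-project.supabase.co", ANON_KEY)

rows = client.table("chats").select("*").execute()  # returns ALL users' rows if RLS is off
print(len(rows.data))

# The fix lives server-side, in SQL, not in the client:
#   alter table chats enable row level security;
#   create policy "own rows only" on chats
#     for select using (auth.uid() = user_id);
```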

### NVIDIA Lyra 2.0 - Explorable 3D Worlds from One Photo by Sophie Zhang

*   **One-Photo-to-3D Pipeline:** NVIDIA's Spatial Intelligence Lab launched Lyra 2.0, a 14B model that converts a single photograph into a fully navigable 3D environment and surface mesh [76, 77].
*   **Two-Stage Architecture:** The tool first uses Wan 2.1-14B to autoregressively generate camera-controlled video from the input image, then uses Depth Anything V3 to lift that video into 3D Gaussian splats [78, 79] (a stub sketch of the dataflow follows this list).
*   **Technical Solutions:** The model counters "spatial forgetting" by maintaining dense 3D correspondences as a spatial index, and fixes "temporal drifting" with self-augmentation training that teaches the model to correct its own drift [79, 80].
*   **Severe Deployment Restrictions:** The code is open, but the model weights are strictly restricted under an Internal Scientific Research and Development License, explicitly prohibiting any commercial or production use [77, 81]. Additionally, it cannot render dynamic moving objects and requires roughly 80GB of VRAM (a minimum of one H100 GPU) to run [82].
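
A stub sketch of the dataflow described above; every function is a stand-in, since the article documents the architecture but not NVIDIA's interfaces.

```python
# Stage 1: camera-controlled video from one photo (Wan 2.1-14B stand-in).
# Stage 2: per-frame depth (Depth Anything V3 stand-in) lifted to splats.
import numpy as np

def generate_camera_video(photo: np.ndarray, n_frames: int = 8) -> list[np.ndarray]:
    return [photo.copy() for _ in range(n_frames)]  # stub: real model renders new views

def estimate_depth(frame: np.ndarray) -> np.ndarray:
    return np.ones(frame.shape[:2])                 # stub depth map

def lift_to_splats(frames: list[np.ndarray], depths: list[np.ndarray]) -> list[tuple]:
    return list(zip(frames, depths))                # stub: back-projection into Gaussians

photo = np.zeros((256, 256, 3))
frames = generate_camera_video(photo)
splats = lift_to_splats(frames, [estimate_depth(f) for f in frames])
print(f"{len(splats)} frame/depth pairs ready for splat fitting")
```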

### Tesla Hid Thousands of Fatal Autopilot Incidents, RTS Says by Daniel Okafor

*   **Damaging Exposure:** A primetime investigation by Swiss broadcaster RTS combined earlier data leaks with recent court rulings to show that Tesla concealed thousands of fatal incidents involving its Autopilot feature [83-85].
*   **The Internal Log:** Drawing from the 2023 "Tesla Files" leak, the report confirmed over 2,400 customer complaints of spontaneous acceleration and more than 1,000 accidents tied directly to Autopilot that Tesla kept buried [85, 86]. 
*   **Miami Court Verdict:** The piece hinges on a $243 million federal verdict in Miami [84, 87]. During the trial, plaintiffs recovered crash-data logs that Tesla had falsely claimed were "corrupted," proving the Autopilot system detected an obstacle, failed to brake, and issued a warning only at the moment of impact [88, 89].
*   **Regulatory Backlash:** Tesla admitted to the NHTSA that "data and labeling limitations" caused it to under-report crashes, prompting the agency to upgrade its investigation to an engineering analysis while the DOJ pursues a parallel fraud probe [90, 91].