## Sources

1. [Anthropic Puts $100M Behind Claude Certification Program](https://awesomeagents.ai/news/anthropic-claude-certified-architect-partner-network/)
2. [CEO Asked ChatGPT How to Dodge $250M Bonus - Lost in Court](https://awesomeagents.ai/news/krafton-chatgpt-250m-bonus-court-ruling/)
3. [Inside Amazon's Trainium Lab - How It Beat NVIDIA](https://awesomeagents.ai/news/amazon-trainium-chip-lab-openai-anthropic/)
4. [Nemotron-Cascade 2: 30B Open MoE, One GPU, Beats 120B](https://awesomeagents.ai/news/nvidia-nemotron-cascade-2-open-moe-30b/)
5. [Leanstral Outperforms Claude Sonnet at Formal Code Proofs](https://awesomeagents.ai/news/leanstral-mistral-lean4-proof-agent/)
6. [WordPress.com Opens Write Access to AI Agents via MCP](https://awesomeagents.ai/news/wordpress-com-mcp-ai-agents-write-publish/)

---

### Anthropic Puts $100M Behind Claude Certification Program | Awesome Agents by Daniel Okafor

*   **Anthropic has introduced the Claude Certified Architect - Foundations (CCA-F) exam, aimed at testing production architecture skills rather than basic chatbot prompting** [1, 2]. 
*   **The certification heavily emphasizes agentic systems**, with the top-weighted domain being agentic architecture and orchestration (27%), covering multi-agent systems and task decomposition [1-3].
*   The 120-minute, 60-question exam costs $99 per attempt, though it is free for the first 5,000 partner employees [1, 4]. Notably, **no retakes are currently offered**, unlike other major cloud certifications [4, 5].
*   Alongside the exam, **Anthropic is investing $100 million into its Claude Partner Network** to provide training, sales enablement, embedded engineers, and co-marketing [1, 4, 6]. 
*   Major consulting firms are heavily investing in this ecosystem, with **Accenture training 30,000 professionals on Claude and Cognizant providing Claude access to 350,000 associates** [1, 7].
*   Anthropic provides four free courses via Anthropic Academy to help candidates prepare for the credential [2].
*   **Critics argue the program is a strategic move to create vendor lock-in**, similar to what AWS, GCP, and Azure did, structurally binding enterprise consulting pipelines to Anthropic's ecosystem [4, 5, 8].

### CEO Asked ChatGPT How to Dodge $250M Bonus - Lost in Court | Awesome Agents by Daniel Okafor

*   **Krafton CEO Changhan Kim used ChatGPT to craft a corporate takeover strategy to avoid paying a $250 million earn-out bonus** to the creators of Subnautica following a $500 million acquisition [9, 10].
*   When ChatGPT initially stated the contract would be "difficult to cancel," **Kim continued prompting until the AI generated "Project X,"** a strategy that included seizing publishing rights, taking control of source code, and framing the financial dispute as a concern over "fan trust" and "quality" [11, 12].
*   Despite explicit warnings from his head of corporate development that the AI's plan would trigger lawsuits and reputational damage, **Kim ignored human advice and executed the AI-generated strategy**, firing three independent executives without legitimate cause [12-14].
*   **The gaming community quickly identified the resulting public statements as AI-generated PR** [13, 15].
*   **Delaware Vice Chancellor Lori Will ruled against Krafton, explicitly citing the CEO's use of ChatGPT in her ruling** [13, 15]. The judge emphasized that executives must rely on independent human judgment instead of delegating critical business decisions to an AI [16].
*   The court reversed Krafton's actions by reinstating the fired CEO (Ted Gill), extending the bonus deadline by 258 days, and prohibiting Krafton from interfering with the game's release schedule [15].

### Inside Amazon's Trainium Lab - How It Beat NVIDIA | Awesome Agents by Elena Marchetti

*   **Amazon is building a credible alternative to NVIDIA hardware with its custom Trainium AI chips**, largely by engineering for cost-effective memory bandwidth rather than raw compute power [17, 18].
*   **Anthropic has deployed over 1 million Trainium2 chips to train its Claude models** and, as a close hardware partner, is deeply involved in the chips' design decisions [19-21].
*   **OpenAI has committed $138 billion over eight years for Trainium compute capacity**, a procurement deal tied to Amazon's $50 billion investment in the AI lab [19, 20, 22]. 
*   While Trainium2 has lower peak compute (667 TFLOP/s) than NVIDIA's GB200, **it provides a 30-40% better price-performance ratio for reinforcement learning workloads**, which are heavily memory-bound [18, 19]. 
*   The new **Trainium3 generation is 50% cheaper for inference than H100 clusters**, features a 3-nanometer process, and allows Amazon's UltraServers to link up to one million chips [19, 23, 24].
*   Despite Amazon's hardware progress, **NVIDIA maintains dominance through its highly mature CUDA software ecosystem**, whereas Amazon's Neuron SDK still requires significant porting effort for developers [25].
*   Microsoft is reportedly considering a lawsuit over the OpenAI deal, alleging it violates their exclusive cloud hosting agreements [26, 27].
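The price-performance claim above rests on the workloads being memory-bound: when arithmetic intensity is low, achieved throughput is set by bandwidth, not peak FLOPs. A minimal roofline sketch of that argument; only Trainium2's 667 TFLOP/s peak comes from the article, while the bandwidth and intensity figures are illustrative assumptions:

```python
def attainable_tflops(peak_tflops, bandwidth_tbs, flops_per_byte):
    # Roofline model: throughput is capped either by peak compute or by
    # memory bandwidth times arithmetic intensity, whichever is lower.
    return min(peak_tflops, bandwidth_tbs * flops_per_byte)

# Illustrative comparison: a chip with ~4x lower peak compute but similar
# bandwidth loses little on a memory-bound workload (50 FLOPs/byte).
chip_a = attainable_tflops(peak_tflops=667, bandwidth_tbs=2.5, flops_per_byte=50)   # -> 125.0
chip_b = attainable_tflops(peak_tflops=2500, bandwidth_tbs=3.0, flops_per_byte=50)  # -> 150.0
```

At this arithmetic intensity both chips sit on the bandwidth roof, so the large peak-compute deficit costs only a modest fraction of delivered throughput, which is how a cheaper chip can win on price per delivered TFLOP.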

### Leanstral Outperforms Claude Sonnet at Formal Code Proofs | Awesome Agents by Sophie Zhang

*   **Mistral released Leanstral, an open-source (Apache 2.0) sparse mixture-of-experts (MoE) model designed specifically for formal mathematical proofs in Lean 4** [28-30].
*   **Leanstral has 120B total parameters but activates only 6B parameters per token**, making inference significantly cheaper than dense models [29, 31].
*   **The model beat Claude Sonnet 4.6 on the FLTEval benchmark (26.3 vs. 23.7 pass@2 score) at approximately one-fifteenth the cost** ($36 vs. $549 per eval run) [28, 29, 32].
*   Unlike prior models trained on isolated math competitions, **Leanstral was trained on pull requests from realistic collaborative repositories**, such as the Fermat's Last Theorem project at Imperial College London, enabling it to understand project structures and dependencies [30, 31].
*   **Leanstral features built-in Model Context Protocol (MCP) support**, allowing it to interact directly with the local Lean 4 language server for real-time proof state feedback, drastically reducing hallucinations [33, 34].
*   While Claude Opus remains the highest-performing model for this task overall, **Leanstral completely changes the economics for teams needing volume-based formal code verification** [35].
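For readers unfamiliar with the target format, a Lean 4 proof obligation is a machine-checked goal closed by tactics. A toy example of the kind of theorem such a proof agent must discharge (this snippet is illustrative and not drawn from the FLTEval benchmark):

```lean
-- A trivial goal: commutativity of addition on natural numbers.
-- The language server reports the open goal `a + b = b + a`; the agent
-- closes it by supplying a term from the core library.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```

Real benchmark goals are far larger, but the loop is the same: the model proposes a tactic, the Lean server type-checks it and returns the updated proof state, which is exactly the feedback channel the MCP integration exposes.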

### Nemotron-Cascade 2: 30B Open MoE, One GPU, Beats 120B | Awesome Agents by Sophie Zhang

*   **NVIDIA launched Nemotron-Cascade-2-30B-A3B, an open-weight hybrid Mamba-Transformer model** that boasts 30 billion total parameters but **activates only 3 billion parameters per token** [36, 37].
*   **The model is highly efficient, fitting onto a single 24GB RTX 4090 GPU** using Q4 quantization while offering a massive 1 million token context window [36-38].
*   Remarkably, **this 3B-active model outperforms NVIDIA's much larger Nemotron-3-Super 120B model** and outscores competitors like Qwen3.5-35B-A3B on major coding and math benchmarks [36, 39, 40].
*   It scored **92.4 on AIME 2025 and 87.2 on LiveCodeBench v6**, scores NVIDIA claims correspond to gold-medal performance at major math and coding competitions like the IMO and IOI [37, 40].
*   **The model utilizes "Cascade RL," a sequential reinforcement learning technique** that trains on one domain at a time using the strongest available teacher models for supervision [41].
*   The model weights include both an instruct mode for fast responses and a **"thinking mode" (chain-of-thought)** for complex reasoning tasks [39, 42].
*   It is released under the NVIDIA Open Model License (which permits commercial use but is not Apache 2.0), though the SFT and RL training datasets are fully public on HuggingFace [41, 43].
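The single-GPU claim is easy to sanity-check with back-of-the-envelope arithmetic. A sketch of the weight-memory footprint under Q4 quantization; note it deliberately ignores the KV cache, activations, and quantization metadata, all of which consume part of the remaining headroom:

```python
def quantized_weight_gb(total_params: float, bits_per_weight: int) -> float:
    """Approximate weight-memory footprint in GB; ignores KV cache,
    activations, and quantization metadata overhead."""
    return total_params * bits_per_weight / 8 / 1e9

# 30B weights at 4 bits each -> 15.0 GB, inside a 24 GB RTX 4090.
weights_gb = quantized_weight_gb(total_params=30e9, bits_per_weight=4)
```

With only 3B parameters active per token, the compute per step is small, but all 30B weights must still be resident, which is why the quantized total (not the active count) determines whether the model fits on the card.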

### WordPress.com Opens Write Access to AI Agents via MCP | Awesome Agents by Sophie Zhang

*   **WordPress.com significantly expanded its Model Context Protocol (MCP) integration, granting AI agents full write access** to draft posts, publish pages, moderate comments, and alter site metadata [44-46].
*   **The update introduces 19 new MCP operations** across content types including posts, pages, comments, categories, tags, and media libraries [44, 46, 47].
*   The system is compatible with major MCP clients like **Claude, ChatGPT, and Cursor**, and operates via secure OAuth 2.1 tokens [47, 48].
*   Agents can query a site's theme context—including block patterns, colors, and typography—allowing them to **generate design-aware content that matches the website's existing style** [49, 50].
*   To ensure safety, **every write operation requires a `user_confirmed: true` flag**, meaning the agent must describe the action and secure explicit human approval before execution [44, 47, 51]. New posts also default to draft status [47, 51].
*   Despite the guardrails, **critics note structural security concerns for autonomous multi-step agents**; persistent tokens without clear session expiries could leave a site's publishing infrastructure permanently exposed if a user forgets they authorized an agent [52-54].
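The confirmation gate described above can be sketched as a guard on the tool-call payload. Only the `user_confirmed: true` flag and the draft-by-default rule come from the article; the tool and field names below are hypothetical:

```python
# Hypothetical guard for a confirmation-gated MCP write operation.
def guard_write(request: dict) -> dict:
    """Reject any write operation lacking explicit human approval."""
    if request.get("user_confirmed") is not True:
        raise PermissionError("write requires user_confirmed: true")
    request.setdefault("status", "draft")  # new posts default to draft
    return request

call = guard_write({
    "tool": "create_post",          # illustrative tool name
    "title": "Hello from an agent",
    "user_confirmed": True,         # the flag the platform mandates
})
```

A payload without the flag is rejected outright, which is the property the critics' concern targets: the gate protects individual writes, but says nothing about how long the underlying OAuth token stays valid.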