## Sources

1. [The Missing Mechanisms of the Agentic Economy](https://www.oreilly.com/radar/the-missing-mechanisms-of-the-agentic-economy/)
2. [Beyond Code Review](https://www.oreilly.com/radar/beyond-code-review/)
3. [Keep Deterministic Work Deterministic](https://www.oreilly.com/radar/keep-deterministic-work-deterministic/)
4. [What Is the PARK Stack?](https://www.oreilly.com/radar/what-is-the-park-stack/)
5. [Stop Closing the Door. Fix the House.](https://www.oreilly.com/radar/stop-closing-the-door-fix-the-house/)

---

### Beyond Code Review by Mike Loukides

*   **Code review of AI-generated code is becoming a bottleneck**, as humans simply cannot review code as fast as AI systems can generate it [1].
*   Understanding machine-generated code is inherently more difficult than understanding human-written code, meaning **the time saved by having AI write code is often lost during the review process** [1]. 
*   Because traditional code review may not justify its cost, the industry is shifting toward **specification-driven development (SDD), which emphasizes system verification over manual code inspection** [1, 2].
*   The primary objective of software development under SDD is to create **systems whose behavior precisely meets a well-defined customer specification** rather than just writing code that passes human review [2].
*   Human intelligence remains crucial for designing architectures that fulfill "architectural characteristics" or "-ilities" (such as scalability, performance, and auditability), as **AI systems cannot yet reason about these complex traits** [2, 3].
*   Rather than being an obsolete, linear "waterfall" process, **SDD is an agile, circular loop** where specifications are constantly updated as bugs are fixed, tests are created, and user needs become clearer [4-6].
*   Automated tools, such as the command-line tool "Plumb," are being developed to support this continuous loop through specification, planning, implementation, and verification [4].
*   **The focus of development is moving to the beginning and end of the process**—determining what the code should do and thoroughly verifying that it works—laying the groundwork for a new workflow where humans are not overwhelmed by reviewing AI code [3, 6].
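The shift SDD describes, verifying the system against a well-defined specification instead of reading the code line by line, can be sketched as an executable spec. This is a minimal illustration, not the article's "Plumb" tool; the function and check names are hypothetical:

```python
# Sketch of specification-driven verification: the spec is a set of
# executable checks, so correctness is established by running them
# rather than by a human reviewing the implementation.

def apply_discount(price: float, percent: float) -> float:
    """Implementation under test -- could be AI-generated."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be in [0, 100]")
    return round(price * (1 - percent / 100), 2)

# The specification: the behavior the customer actually asked for.
SPEC = [
    ("10% off 100.00 is 90.00", lambda: apply_discount(100.00, 10) == 90.00),
    ("0% off leaves the price unchanged", lambda: apply_discount(59.99, 0) == 59.99),
    ("100% off is free", lambda: apply_discount(25.00, 100) == 0.00),
]

def verify(spec):
    """Return the names of any spec clauses the implementation fails."""
    return [name for name, check in spec if not check()]

if __name__ == "__main__":
    failed = verify(SPEC)
    print("spec satisfied" if not failed else f"failed: {failed}")
```

In this model a human writes and maintains the spec, and the implementation (however it was produced) is accepted when verification passes.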

### Keep Deterministic Work Deterministic by Andrew Stellman

*   **LLM-based systems suffer from the "March of Nines,"** the observation that reaching 90% reliability is relatively easy, while each additional "nine" (99%, 99.9%) demands exponentially more engineering effort [7].
*   Multi-step AI workflows are highly vulnerable to **cascading failures**, where a single miscalculation or error in an early step compounds and corrupts all downstream results [8, 9].
*   **LLMs struggle significantly with deterministic work**, such as tracking state, performing arithmetic, or evaluating strict rules, partly because they process words as tokens rather than individual characters [9, 10].
*   While techniques like **Chain of Thought (CoT)** prompting provide structural constraints that help models catch their own mistakes, they do not completely eliminate errors and drastically increase API costs and processing time [11, 12]. 
*   The most effective way to eliminate cascading failures is to **remove deterministic tasks from the LLM entirely and hand them over to simple, deterministic code** [10, 13]. 
*   During an eight-iteration experiment with a blackjack simulation, replacing an LLM-based strategy validator with a simple deterministic lookup table was the single biggest driver of improvement, drastically raising the system's pass rate [14-16].
*   The fundamental takeaway for agentic engineering is: **If a short function can complete the job flawlessly and instantly, do not rely on an LLM to do it** [13].
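Both points above, cascading failure arithmetic and the lookup-table fix, can be sketched in a few lines. The blackjack entries here are a simplified, hypothetical fragment, not the article's actual strategy table:

```python
# 1. Cascading failures: a pipeline of n steps that are each 95% reliable
#    is far less reliable end to end (reliability compounds as p**n).
step_reliability = 0.95
for n_steps in (1, 5, 10, 20):
    print(n_steps, round(step_reliability ** n_steps, 3))

# 2. Deterministic work belongs in deterministic code: a strategy check
#    an LLM gets probabilistically right, a dict lookup gets right every
#    time. Keys are (player_total, dealer_upcard); values are the move.
BASIC_STRATEGY = {
    (16, 10): "hit",
    (17, 10): "stand",
    (11, 6): "double",
}

def validate_move(player_total, dealer_upcard, proposed_move):
    """Deterministic validator: instant, free, and always consistent."""
    return BASIC_STRATEGY.get((player_total, dealer_upcard)) == proposed_move

print(validate_move(16, 10, "hit"))  # table lookup, no LLM call
```

The same reliability math explains why swapping one early validator from an LLM to a lookup table can raise a whole pipeline's pass rate: it removes an error source that every downstream step would otherwise inherit.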

### Stop Closing the Door. Fix the House. by Angie Jones

*   Many open-source maintainers, frustrated by an influx of **low-quality AI-generated pull requests (PRs)**, have resorted to **banning external AI contributions entirely** [17-19].
*   Closing the door is counterproductive; instead, **maintainers need to prepare their repositories for AI coding assistants** [19].
*   Projects should include a `HOWTOAI.md` file to give human contributors clear instructions on **how to use AI responsibly**, outlining what it is good for, establishing accountability, and mandating transparent validation [20, 21].
*   Repositories must also include an `AGENTS.md` file that directly **provides AI agents with project structures, rules, linting steps, and strict guardrails** so the agents understand the project conventions [22].
*   Maintainers can fight fire with fire by **using an AI code reviewer as a first touchpoint** to give contributors immediate feedback on obvious issues before human review, though it requires specific custom instructions to be effective [23, 24].
*   **A robust test suite is more critical than ever**, serving as the ultimate safety net against bad AI-generated code and breaking changes [24, 25].
*   Heavy lifting should be automated through Continuous Integration (CI) pipelines, creating an **objective quality bar that runs formatting, linting, and security checks automatically on every PR** [25, 26].
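To make the `AGENTS.md` idea concrete, here is an illustrative fragment of what such a file might contain. The layout, file paths, and `make` targets are hypothetical placeholders for a project's real conventions:

```markdown
# AGENTS.md

## Project layout
- `src/` — library code
- `tests/` — test suite; every PR must keep it green

## Rules for agents
- Run `make lint` and `make test` before proposing a change.
- Match the existing code style; do not reformat unrelated files.
- Never modify generated files or anything under `vendor/`.
- Keep PRs small: one logical change per PR, with a clear description.
```

A `HOWTOAI.md` aimed at human contributors would cover the complementary ground: where AI help is appropriate, and that the contributor remains accountable for validating what the tool produced.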

### The Missing Mechanisms of the Agentic Economy by Tim O’Reilly

*   AI disclosures should **focus on the deployed technology, business models, and operating metrics** rather than just inspecting models at the factory level [27].
*   Disclosures function similarly to **communications protocols**, operating as functional standards that allow systems to share information, making them critical loci for observability and regulation [28-30].
*   Protocols serve as **market-shaping mechanisms and "engineered arguments,"** facilitating dynamic, decentralized cooperation and innovation, unlike dominant APIs which are unilateral "engineered agreements" [31-34].
*   **Agent Skills can be viewed as protocols** because they codify complex, structured knowledge and workflows that both human teams and AI agents can follow [35-37].
*   The current agentic economy suffers from inefficiencies (like intellectual property battles) and is **in desperate need of "mechanism design"**—engineering rules and incentives so self-interested actors produce mutually beneficial outcomes [38-40].
*   There are several **missing mechanisms that must be built to support a vibrant AI economy**, including open skills markets, institutions for quality governance, registries for discovery, extension architectures, and payment layers [41-43]. 
*   **A new form of neutral, organic search for agents is required**, relying on performance signals to match agents with the best available skills rather than allowing single gatekeepers to enforce commercial routing [44-46].

### What Is the PARK Stack? by Dean Wampler

*   The PARK stack represents the emerging **foundational open-source software stack tailored specifically for generative AI applications**, mirroring the historical importance of the LAMP stack for early web development [47, 48].
*   **P stands for PyTorch**, which has become the dominant framework for designing, training, and running inference for the world's most prominent AI models [48, 49].
*   **A stands for AI models and agents**, reflecting the shift from single chatbots to complex autonomous systems (like RAG systems) that reason, plan, use memory, and pursue goals on a user's behalf [48, 50-52].
*   **R stands for Ray**, a highly flexible distributed programming system that handles the massive, fine-grained compute and memory management requirements inherent in training, tuning, and running large models [48, 53, 54].
*   **K stands for Kubernetes**, the industry-standard cluster management system that oversees coarse-grained infrastructure and application services across different hardware and cloud platforms without vendor lock-in [48, 55].
*   Ray and Kubernetes are highly **complementary**, with Ray handling the lightweight distributed computing processes inside the broader containerized environments managed by Kubernetes [56].
*   While PARK covers the basics, developers will also need **new supporting tools for generative AI**, such as vector databases for multimodal data, agent orchestration protocols (like MCP), and advanced memory management systems to optimize model context windows [57-59].