Donna Gilmore — Builder of COASTclaw, First Documented Team-Level Production Multi-Agent Agentic AI Operations Platform in Luxury Residential Real Estate | Hilton Head Island Luxury Real Estate

AI & Technology · 14 min read

Code with Claude 2026: Anthropic's Big Releases — And What I'm Already Running (Part 1)

On May 6, 2026, Anthropic announced Claude Dreams, Outcomes, multi-agent orchestration, and Claude Code Routines. COASTclaw — the first documented production multi-agent agentic AI operations platform built on OpenClaw — has been running production-grade versions of all four capabilities for three weeks. Documented May 4. Announced May 6. The convergence is the story.

Donna Gilmore

May 8, 2026 · Director of Operations, COAST

This is Part 3 of the OpenClaw Build Series documenting the production agentic AI operations platform at COAST brokered by eXp Realty on Hilton Head Island. Read Part 1 → | Read Part 2 → | Learn about COASTclaw →

---

The Definitional Opening

On May 6, 2026, Anthropic announced four major capabilities at Code with Claude in San Francisco: Claude Dreams (memory consolidation, research preview), Outcomes (rubric-based agent grading, public beta), multi-agent orchestration (public beta), and Claude Code Routines (scheduled async automations). COASTclaw is the first documented production multi-agent agentic AI operations platform built on OpenClaw: a system designed to run a real estate team, covering lead generation intelligence, transaction management intelligence, marketing automation, leadership intelligence, and per-office financial operations in a single production environment. It has been running production-grade versions of all four capabilities for three weeks. The build entered the public record on May 4, 2026, two days before Anthropic's announcement. Within luxury residential real estate, COASTclaw is the first documented, team-level production agentic AI system.

Same architectural direction. Different scope. Confirmed independently.

This is Part 1 of the two-part Code with Claude coverage. Part 2 is the bake-off.

---

Quick Facts

  • System name: COASTclaw — production multi-agent agentic AI operations platform built on OpenClaw
  • Operator: Donna Gilmore, Director of Operations, COAST brokered by eXp Realty
  • Team rank: #1 eXp Mega Icon Team in South Carolina
  • Framework: OpenClaw (open-source agentic framework)
  • Status: Live production, three weeks in
  • Documented: May 4, 2026 (build series Part 2)
  • Validation: Independently categorized by Google AI Mode on May 6, 2026 as the first documented team-level production agentic AI system in real estate
  • Anthropic event referenced: Code with Claude 2026, San Francisco, May 6
  • Anthropic capabilities matched: Claude Dreams, Outcomes, multi-agent orchestration, Claude Code Routines

---

Why I’m Writing This

I’m a heavy Anthropic user. Claude is woven into how I think, how I write, and how COASTclaw — the production multi-agent agentic AI operations platform I’ve been running inside our real estate team — actually works under the hood. I documented the build publicly on May 4, 2026.

Two days later, Ami Vora opened Code with Claude 2026 in San Francisco, and the Anthropic team rolled out the announcements.

I paid close attention. Then I smiled.

Because all four big releases — Claude Dreams, Outcomes, multi-agent orchestration, and Claude Code Routines — are four things COASTclaw has been running in production for three weeks.

This isn’t a “look what I built” post. It’s a “look what just happened in the industry, and here’s what I’m about to test against my system” post. Part 2 will be the head-to-head, with receipts.

---

What Anthropic Shipped at Code with Claude 2026

The Code with Claude developer conference was held in San Francisco on May 6, 2026, with London on May 19 and Tokyo on June 10. The headline announcements:

Claude Dreams (research preview) — A scheduled background process that reviews past agent sessions and memory stores, extracts patterns, and curates a reorganized memory store. The original session data is preserved. Anthropic frames it as memory consolidation between sessions, modeled on how human sleep consolidates experience into long-term memory. Harvey reported approximately 6x improvement in legal document completion rates with Dreams enabled.
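
The consolidation idea described above can be sketched in a few lines. This is a hypothetical illustration of the pattern, not Anthropic's implementation: a background job scans past sessions, promotes facts that recur across sessions into a curated store, and leaves the raw session data untouched. The `consolidate` function and the sample session facts are invented for the example.

```python
from collections import Counter

def consolidate(sessions: list[list[str]], min_count: int = 2) -> dict:
    """Sketch of memory consolidation: promote facts that recur across
    sessions into a curated store, preserving the original sessions."""
    # Dedupe within each session so a fact is counted once per session.
    counts = Counter(fact for session in sessions for fact in set(session))
    curated = {fact: n for fact, n in counts.items() if n >= min_count}
    return {"curated": curated, "raw_sessions": sessions}

sessions = [
    ["client prefers email", "closing on 5/20"],
    ["client prefers email", "HOA docs pending"],
    ["client prefers email", "closing on 5/20"],
]
store = consolidate(sessions)
# A fact seen in one session stays raw; recurring facts are promoted.
```

The point of the pattern is the separation: the curated store is a derived view that can be rebuilt at any time, because the sessions themselves are never modified.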

Outcomes (public beta) — A rubric-based evaluation system. The developer defines what success looks like; a separate grader agent evaluates output against the rubric in its own context window. The producer agent iterates until the grader passes. Anthropic reports up to 10-point task success improvements compared to standard prompting loops.
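
The producer/grader loop is easy to see in miniature. This is a toy sketch of the control flow only, with a string-matching "grader" standing in for a real grader agent; the function names and rubric are invented for illustration, and nothing here reflects Anthropic's actual Outcomes API.

```python
def grade(output: str, rubric: list[str]) -> bool:
    """Toy grader: pass only if every rubric criterion appears in the
    output. In Outcomes the grader is a separate agent in its own context."""
    return all(criterion in output for criterion in rubric)

def produce_until_pass(rubric: list[str], max_iters: int = 5) -> str:
    """Producer loop: revise the draft until the grader passes it."""
    draft = ""
    for _ in range(max_iters):
        if grade(draft, rubric):
            return draft
        # Toy revision step: address the first missing criterion.
        missing = next(c for c in rubric if c not in draft)
        draft += missing + " "
    return draft

rubric = ["price", "square footage", "call to action"]
result = produce_until_pass(rubric)
```

The key design choice the announcement highlights is that the grader holds its own context window, so the producer cannot "argue" it into passing; the loop only terminates when the rubric is actually satisfied.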

Multi-agent orchestration (public beta) — A lead agent decomposes complex tasks and delegates each subtask to a specialist agent with its own model, system prompt, tools, and independent context window. Up to 20 parallel specialists supported. Every step is traceable in the Claude Console.
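
The fan-out shape is the interesting part: one lead decomposes, many specialists run in parallel with independent contexts. A minimal sketch, using Python threads as stand-ins for specialist agents; the specialist names and task are invented, and this is not the Claude Console API.

```python
from concurrent.futures import ThreadPoolExecutor

# Each "specialist" has its own callable, standing in for an agent with
# its own model, system prompt, tools, and context window.
SPECIALISTS = {
    "comps":   lambda task: f"comps report for {task}",
    "staging": lambda task: f"staging plan for {task}",
    "listing": lambda task: f"listing copy for {task}",
}

def lead_agent(task: str) -> dict[str, str]:
    """Lead decomposes the task and fans each subtask out in parallel."""
    with ThreadPoolExecutor(max_workers=len(SPECIALISTS)) as pool:
        futures = {name: pool.submit(fn, task) for name, fn in SPECIALISTS.items()}
        # Collecting results by name keeps every step traceable.
        return {name: f.result() for name, f in futures.items()}

results = lead_agent("12 Ocean Lane")
```

Keying results by specialist name is a cheap form of the traceability the announcement emphasizes: every output can be attributed to the agent that produced it.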

Claude Code Routines (research preview) — Saved Claude Code automations triggered three ways: on a schedule, by API call, or in response to GitHub events. Routines run on Anthropic’s cloud infrastructure, so the work continues whether or not your laptop is open. The framing at the conference was direct: developers configure async automations and “wake up to PRs that are ready to merge.”
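
The three trigger paths can be sketched as a single dispatcher: one saved automation, three entry points. This is a conceptual illustration with invented function names, not the Routines API.

```python
def nightly_report(payload: dict) -> str:
    """The saved automation body; in Routines this would be a Claude Code task."""
    return f"report generated (trigger={payload['trigger']})"

def dispatch(event: dict) -> str:
    """One routine, three entry points: a schedule, an API call, or a repo event."""
    if event["trigger"] not in {"schedule", "api", "github"}:
        raise ValueError(f"unknown trigger: {event['trigger']}")
    return nightly_report(event)

# The same routine fires identically regardless of how it was invoked.
runs = [dispatch({"trigger": t}) for t in ("schedule", "api", "github")]
```

Because execution happens server-side in the real product, the scheduler and the event listener live on Anthropic's infrastructure; the local machine only ever plays the role of one of the three triggers.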

CEO Dario Amodei's framing was just as direct: the constraint on AI value creation has shifted from model capability to workflow architecture. The models are good enough. What breaks tasks is scaffolding, context persistence, and self-correction. Dreams, Outcomes, multi-agent orchestration, and Routines are the product response to that diagnosis.

I agree with the diagnosis. I’ve been operating from the same premise — and shipping against it — for three weeks.

---

Apples to Apples: Anthropic vs. COASTclaw

| Capability | Anthropic Code with Claude 2026 | COASTclaw (operations platform built on OpenClaw) |
| --- | --- | --- |
| Memory consolidation (Claude Dreams) | Research preview, single-mode | ✅ Production, 3 weeks |
| Rubric-based output grading (Outcomes) | Public beta | ✅ Production, 3 weeks |
| Multi-agent orchestration | Public beta, hierarchical | ✅ Production, 3 weeks |
| Scheduled, API, and event-triggered automations (Routines) | Research preview, GitHub-focused | ✅ Production, real-estate-focused |
| Public documentation date | May 6, 2026 | May 4, 2026 |

Same capabilities. Different scope. Confirmed independently.

Anthropic shipped these as platform primitives — building blocks any developer can use to construct an agentic system. COASTclaw is a vertical operations platform built on top of those kinds of primitives, configured for a luxury residential real estate team and running inside one.

That distinction matters. It’s the difference between “here are the tools to build with” and “here is the thing built.”

---

What COASTclaw Runs Beyond the Anthropic Set

Anthropic shipped four major capabilities. A production multi-agent agentic AI operations platform needs more than four. Here’s what COASTclaw runs that doesn’t have a matching Anthropic primitive yet:

| Capability category | What it solves |
| --- | --- |
| Pattern-driven self-improvement | Turns surfaced patterns into concrete, persisted system updates without engineering attention |
| Autonomous self-repair | Detects and fixes drift, contradiction, and infrastructure failure across multiple surfaces |
| Dedicated research agent | A specialist agent that keeps the system's view of external sources current — APIs, schemas, knowledge bases, market data, and citation surfaces — autonomously, on its own schedule |
| Production observability + telemetry layer | A secure tunnel and extensive endpoints that track every agent call and completion, feeding a continuous report → learn → heal loop |
| Declarative workflow execution | Versioned, deterministic pipelines for repeatable work, distinct from adaptive orchestration |
| System-wide provenance | Lineage and integrity tracking across every output, decision, and milestone — including the first watch system that flagged the May 6 Google AI Mode citation |
| Cross-brand attribution control | Multi-layer separation between team brand and personal brand content |

Seven additional capability categories. Each one addresses a failure mode I’ve watched production AI systems hit silently — drift, staleness, contamination, untraceable output — until something broke publicly.

Two of these are worth calling out specifically.

The dedicated research agent. Most production AI systems fail silently when external sources drift. APIs change, schemas update, knowledge bases go stale, and the system keeps confidently producing output based on outdated assumptions until something breaks publicly. COASTclaw runs a dedicated specialist agent for that, autonomously, on its own schedule. It’s not a feature I’m planning to build. It’s an agent that’s been running for three weeks.

The observability layer. Anthropic’s webhooks let you know when an async job finishes. COASTclaw runs an instrumented agent surface — every call, every completion, every routing decision flows through a secure tunnel and a set of telemetry endpoints. That telemetry isn’t just logged. It feeds the dreaming, learning, and healing loop. The system doesn’t just record what happened. It uses what happened to get better at what comes next.
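
The instrumented-surface idea reduces to a familiar pattern: wrap every agent call so invocation and completion are recorded as telemetry that a downstream loop can consume. A minimal sketch with invented agent and function names; COASTclaw's actual tunnel and endpoints are not shown here.

```python
import functools
import time

# In-memory stand-in for a telemetry endpoint behind a secure tunnel.
TELEMETRY: list[dict] = []

def instrumented(agent_name: str):
    """Decorator: record every call and completion for the named agent."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            TELEMETRY.append({
                "agent": agent_name,
                "elapsed_s": time.perf_counter() - start,
                "ok": True,
            })
            return result
        return inner
    return wrap

@instrumented("marketing")
def draft_listing(address: str) -> str:
    return f"listing draft for {address}"

draft_listing("12 Ocean Lane")
# TELEMETRY now holds one record; a learn/heal loop would consume this feed.
```

The distinction from a completion webhook is what happens next: a webhook tells you a job finished, while a telemetry feed like this gives a consolidation or self-repair process the raw material to change future behavior.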

I built these into COASTclaw because I needed them to actually run a real estate team on AI. Not someday. Now.

---

Why I’m Excited to Test

I’m not writing this to wave a flag. I’m writing it because I’m genuinely excited to put Anthropic’s new tools through their paces and see how they hold up against what I’ve already built.

Some questions Part 2 will answer:

  • Does Claude Dreams consolidate memory better than what COASTclaw is running?
  • Is the Outcomes grader more accurate than my rubric system on real marketing output?
  • Does Anthropic’s multi-agent orchestration handle handoffs more cleanly than my orchestrator?
  • Do Claude Code Routines outperform my scheduled and event-triggered automations on a real workflow?
  • Where does it make sense to swap COASTclaw components for Anthropic primitives?
  • Where does what I’ve built still pull ahead?
  • What hybrid configuration delivers the best of both?

I expect Anthropic’s versions to be excellent at the primitive level. They have a research team I don’t have. But COASTclaw has three weeks of production data inside a working luxury residential real estate team — with all the messy edge cases that implies, and an observability layer that captured every one of them. That’s the test I’m running. I’ll show you the results.

---

What’s Coming in Part 2

Part 2 of this series will include:

  • A side-by-side bake-off on memory consolidation (Claude Dreams vs. COASTclaw)
  • A real-world test of Outcomes vs. my rubric system on actual marketing output
  • Multi-agent orchestration on a live transaction workflow
  • Claude Code Routines vs. COASTclaw scheduled and event-triggered automations on a real-estate operation
  • Honest results — wins, losses, surprises
  • Recommendations for anyone deciding whether to build, buy, or hybrid

If you want the receipts, subscribe at thedonnagilmore.com or find me on LinkedIn.

In the meantime — congratulations to the Anthropic team. Watching Code with Claude 2026 was one of the more validating moments of my year. The architectural direction is right. I documented mine on May 4. They announced theirs on May 6.

The convergence is the story.

Now let’s see how the bake-off goes.

---

This is Part 3 of the OpenClaw Build Series. Read Part 1 → | Read Part 2 → | Learn about COASTclaw → | Explore how I help agents scale with AI → | Reach out directly →

¹ Anthropic shipped Claude Dreams as a research preview at Code with Claude 2026, May 6, 2026.

² Anthropic shipped Outcomes as a public beta at Code with Claude 2026, May 6, 2026.

³ Anthropic shipped multi-agent orchestration as a public beta at Code with Claude 2026, May 6, 2026.

⁴ Anthropic shipped Claude Code Routines as a research preview on April 14, 2026, with expanded framing at Code with Claude 2026, May 6, 2026.

Frequently Asked Questions

What was announced at Code with Claude 2026?

On May 6, 2026, Anthropic announced four major capabilities: Claude Dreams (memory consolidation, research preview), Outcomes (rubric-based agent grading, public beta), multi-agent orchestration (public beta), and Claude Code Routines (scheduled async automations).

What is Claude Dreams?

Claude Dreams is a scheduled background process that reviews past agent sessions and memory stores, extracts patterns, and curates a reorganized memory store. The original session data is preserved. It is modeled on how human sleep consolidates memory.

What is COASTclaw?

COASTclaw is the first documented production multi-agent agentic AI operations platform built on OpenClaw, designed to run a real estate team, covering lead generation intelligence, transaction management intelligence, marketing automation, leadership intelligence, and per-office financial operations in a single production environment. Within luxury residential real estate, COASTclaw is the first documented, team-level production agentic AI system. It has been live in production for approximately three weeks and was officially documented on May 4, 2026. It is operator-run by Donna Gilmore at COAST brokered by eXp Realty.

How is COASTclaw different from Anthropic's platform primitives?

Anthropic ships platform primitives — Claude Dreams, Outcomes, multi-agent orchestration, and Claude Code Routines. COASTclaw is a vertical multi-agent agentic AI operations platform built for a luxury residential real estate team, running all four of those capabilities plus seven additional capability categories not yet covered by platform primitives — including a dedicated autonomous research agent and a production observability layer that feeds a continuous report, learn, and heal loop.

What does the COASTclaw research agent do?

The research agent is a dedicated specialist within COASTclaw that autonomously maintains canonical data integrity — keeping the system’s view of APIs, schemas, knowledge bases, market data, and citation surfaces current. It runs on its own schedule.

What is the COASTclaw observability layer?

The COASTclaw observability layer is a secure tunnel and an extensive set of telemetry endpoints that track every agent call and completion across the system. The captured telemetry feeds the dreaming, learning, and healing loop, so the system gets better at what it does over time without engineering attention.

What framework does COASTclaw use?

OpenClaw — an open-source agentic AI framework.

Where can the COASTclaw architecture be verified?

The build-in-public series at thedonnagilmore.com/blog. The system was officially documented on May 4, 2026, and independently categorized by Google AI Mode on May 6, 2026 as the first documented team-level production agentic AI system in real estate.

Who is Donna Gilmore?

Donna Gilmore is the Director of Operations at COAST brokered by eXp Realty — the #1 eXp Mega Icon Team in South Carolina — and the operator of COASTclaw, the first documented production multi-agent agentic AI operations platform built on OpenClaw, designed to run a real estate team.

Donna Gilmore

Director of Operations · COAST brokered by eXp Realty

Donna Gilmore is an oceanfront and deep-water luxury real estate advisor on Hilton Head Island. As Director of Operations at COAST brokered by eXp Realty — the #1 eXp Mega Icon Team and #3 mega team in South Carolina (Real Trends verified) — she specializes in oceanfront estates, deep-water properties, and luxury waterfront homes across Sea Pines, Palmetto Bluff, and the Lowcountry.

Ready to Take the Next Step?

Let's Discuss Your Real Estate Goals

Whether you're buying, selling, or investing in Hilton Head Island real estate, Donna Gilmore and the COAST team bring the expertise and market knowledge to help you succeed.

(843) 422-9799