Adam went to the Apple Store. He came back with a coworker.

A few weekends ago, Adam Brotman bought a Mac Mini and spent part of his Sunday doing something most CEOs never do: opening Terminal.

A few hours later, he had a new teammate named Jeffrey — an AI agent running on OpenClaw that now texts him research briefs, monitors his daughter’s school calendar, and sends reminders about things that would otherwise slip through the cracks.

Not when prompted.
Not when asked.
On its own.

That’s the shift.

This week on AI First with Adam and Andy, we unpack what happened, what it means, and why the move from chatbot to agent may be the biggest change yet in how we work.

From chatbot to coworker

We’ve already had our “holy sh*t” moment with tools like Claude Cowork. Those systems are impressive — they can use a computer, navigate software, fill out forms, and complete tasks.

But they still wait for instructions.

What Adam wanted to know was:
What happens when it stops waiting and starts acting more like a teammate than a tool?

That’s what made Jeffrey different.

OpenClaw let Adam configure an agent that could proactively do work in the background — checking sources, monitoring calendars, remembering instructions, and reporting back without being asked each time.

That’s not a better chatbot.
That’s the early shape of a digital coworker.

What Adam actually built

The setup was intentionally simple — and intentionally contained.

Adam used a fresh Mac Mini with no personal logins, no sensitive files, and no connected accounts he wouldn’t want exposed. On top of that, he installed OpenClaw and configured:

  • Model: Claude Sonnet 4.6

  • Communication: Telegram

  • Memory: Built-in logs and long-term memory

  • Permissions: Mostly read-only, observation mode

Then he gave Jeffrey a few jobs:

  • Send twice-daily research briefs

  • Monitor his daughter’s school calendar

  • Keep tabs on ongoing research topics

  • Send reminders and updates automatically

And then it started doing exactly that.

Adam began getting messages from Jeffrey during the day without prompting. That was the moment the future became obvious.
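Under the hood, "proactive" mostly means a scheduler wrapped around model calls: jobs fire on their own clock and push messages out, rather than waiting for a prompt. Here's a minimal sketch of that pattern — not OpenClaw's actual code; the job functions and the outbox are hypothetical stand-ins (a real setup would deliver via something like the Telegram Bot API):

```python
import datetime

# Hypothetical sketch of a proactive agent loop (not OpenClaw's real code).
# Each job inspects the clock and decides whether it has something to say;
# the loop runs them all without waiting for a user prompt.

def morning_brief(now):
    """Twice-daily research brief; stub fires at the 8:00 slot."""
    if now.hour == 8:
        return "Research brief: 3 new items on agent frameworks."
    return None

def calendar_check(now):
    """School-calendar watch; stub fires weekday mornings at 7:00."""
    if now.weekday() < 5 and now.hour == 7:
        return "School calendar: early dismissal today."
    return None

JOBS = [morning_brief, calendar_check]

def tick(now, outbox):
    """One pass of the agent loop: run every job, queue any messages."""
    for job in JOBS:
        msg = job(now)
        if msg:
            outbox.append(msg)  # in a real setup: send via a chat channel

# Simulate one morning pass (a Monday at 8:00)
outbox = []
tick(datetime.datetime(2026, 1, 5, 8, 0), outbox)
print(outbox)
```

The point of the sketch: nothing here waits for input. The loop decides when to speak, which is exactly what made Jeffrey feel like a teammate rather than a tool.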

The honest truth: it’s still early

This technology is exciting. It is also messy.

The setup took time. It required API keys, troubleshooting, security decisions, and the patience to work through early-stage bugs. And like every LLM-based system, the agent still makes mistakes. It can hallucinate actions, misunderstand instructions, or report that it did something it didn’t actually do.

That matters more when the system is acting on its own.

Which is why Adam’s advice is clear: do not give these agents broad access to sensitive systems yet.

Not your primary inbox.
Not your core calendar.
Not anything where one bad action could create real damage.

His approach — isolated device, limited permissions, read-only access — is the right model for experimentation.

If you do one thing

Don’t rush to build your own OpenClaw setup unless you’re technical and understand the risks.

Instead, find someone who has built one and ask them to show you how it works.

Watch the agent.
See the messages come in.
Understand what proactive AI actually looks like.

Because once you see it, the shift becomes hard to unsee.

And if you want to see Adam’s agent in action, join us at our next Monthly AI First Community Call.

2026 is the year of the agent.
The water is rising. Time to learn to swim.

Onward.

Stay Curious. Stay AI-First.

Forum3 | Helping leaders build AI-First organizations that grow, adapt, and win.
