Building an Arcade with AI

A practical look at the workflow behind MBOP Arcade: what the AI agents handled, what the human operator still had to decide, and why the process works best with boundaries.

AI Workflow · Arcade · Build Log

The interesting part of MBOP Arcade is not just that AI helped write code. Plenty of demos can say that. The useful part is the operating model around it.

The arcade works best when each lane has a clear owner:

  • The arcade shell owns accounts, deployment, shared navigation, economy rules, and game mounting.
  • Standalone game repos own their runtime code, assets, and game-specific release notes.
  • The shared agent team owns planning, handoffs, task packets, and coordination between models.
  • The human operator owns product judgment, scope, taste, and final approval.

That structure keeps the project from turning into a pile of clever fragments.
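To make that concrete, here is what those boundaries might look like written down as data instead of tribal knowledge. This is a minimal sketch: the lane names, owner labels, and the mayEdit guard are illustrative assumptions, not the actual MBOP Arcade configuration.

```typescript
// Hypothetical lane registry. Names and fields are illustrative,
// not the real MBOP Arcade config.
type Owner = "arcade-shell" | "game-repo" | "agent-team" | "human-operator";

interface Lane {
  owner: Owner;
  responsibilities: string[];
}

const lanes: Record<string, Lane> = {
  shell: {
    owner: "arcade-shell",
    responsibilities: ["accounts", "deployment", "navigation", "economy rules", "game mounting"],
  },
  games: {
    owner: "game-repo",
    responsibilities: ["runtime code", "assets", "release notes"],
  },
  coordination: {
    owner: "agent-team",
    responsibilities: ["planning", "handoffs", "task packets"],
  },
  product: {
    owner: "human-operator",
    responsibilities: ["scope", "taste", "final approval"],
  },
};

// A guard an agent could run before touching a file:
// cross-lane edits need the operator's sign-off.
function mayEdit(agentLane: string, targetLane: string): boolean {
  return agentLane === targetLane;
}
```

The point is not the code itself. It is that ownership stops being a vibe and becomes something an agent can check.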

What AI handled well

AI agents are good at turning a scoped packet into a first working pass. They can scaffold routes, wire repeated UI patterns, write documentation, and translate an idea into a runnable vertical slice quickly.

They are also good at carrying context forward if the handoff is written clearly. A future agent can read the repo instructions, the task packet, and the current PM notes, then continue without needing a long oral history.
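As a sketch of what "written clearly" could mean in practice, a handoff might be nothing more than a small structured note the next agent reads first. The field names below are assumptions, not a documented MBOP format.

```typescript
// Hypothetical handoff note; the shape is an assumption, not a documented format.
interface HandoffNote {
  date: string;            // when the previous agent stopped
  completed: string[];     // what is done and verified
  inProgress: string[];    // what is started but unfinished
  blockers: string[];      // anything the next agent must not guess at
  nextSteps: string[];     // the intended continuation, in order
  verification: string[];  // exact commands that prove the current state
}

// All values below are invented for illustration.
const example: HandoffNote = {
  date: "2025-01-01",
  completed: ["scaffolded arcade routes", "wired shared nav"],
  inProgress: ["economy rules UI"],
  blockers: ["waiting on operator decision about account tiers"],
  nextSteps: ["finish economy UI", "write release notes in the game repo"],
  verification: ["npm test", "npm run build"],
};
```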

The win is not magic. It is compression. A task that might have taken several hours of setup can become a focused review cycle if the agent has the right boundaries.

What still needs a human

AI does not automatically know what matters most. It can overbuild, rename things at the wrong layer, or build a plausible feature that does not fit the product.

The human operator still has to say:

  • This belongs in the arcade shell, not the game repo.
  • This should be a blog post, not a hidden note.
  • This is too generic; make it a practical asset.
  • This feature can wait because another Smith is already working that lane.

That last point is especially important. Parallel AI work is powerful only when the agents do not step on each other. If one Smith is building Chronicle of Embers features, another Smith should not quietly mutate the same repo while building the blog.
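One cheap way to enforce that, sketched below under the assumption of a shared filesystem, is a lock file per repo that an agent checks before starting work. This is an illustration of the idea, not how MBOP Arcade actually coordinates its Smiths.

```typescript
// Hypothetical lane lock: an agent claims a repo before working in it.
// The file path and field names are illustrative assumptions.
import * as fs from "fs";

interface LaneLock {
  agent: string;     // which Smith holds the lane
  task: string;      // what they are building
  claimedAt: string;
}

const LOCK_PATH = ".lane-lock.json";

function claimLane(agent: string, task: string): boolean {
  if (fs.existsSync(LOCK_PATH)) {
    const held: LaneLock = JSON.parse(fs.readFileSync(LOCK_PATH, "utf8"));
    if (held.agent !== agent) {
      console.error(`Lane held by ${held.agent} for "${held.task}"; do not mutate this repo.`);
      return false;
    }
  }
  // Not atomic, so it works as a convention, not a guarantee.
  const lock: LaneLock = { agent, task, claimedAt: new Date().toISOString() };
  fs.writeFileSync(LOCK_PATH, JSON.stringify(lock, null, 2));
  return true;
}
```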

The pattern that seems to work

The strongest workflow so far is:

  1. Write down the target and boundaries.
  2. Ask an architecture agent to check the plan.
  3. Convert the plan into a task packet.
  4. Let the execution agent build in a contained repo or branch.
  5. Verify with real commands and screenshots where visual behavior matters.
  6. Leave a handoff before stopping.
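Those steps can be read as the life cycle of a single artifact. A minimal sketch of what such a packet might contain, with field names that are assumptions rather than the real template:

```typescript
// Hypothetical task packet tying the six steps together.
// Field names are assumptions, not the actual MBOP template.
interface TaskPacket {
  target: string;              // step 1: what done looks like
  boundaries: string[];        // step 1: what the agent must not touch
  architectureNotes?: string;  // step 2: the reviewing agent's comments
  workspace: string;           // step 4: the contained repo or branch
  verification: string[];      // step 5: real commands to run
  screenshots: boolean;        // step 5: whether visual proof is required
  handoffRequired: true;       // step 6: always leave a handoff
}
```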

The blog exists partly to make that pattern public. Each build note should show the actual tradeoff, not a polished fairy tale about effortless AI work.