Commando

Natural-language CLI agent for the Sui ecosystem. One install on Windows, Linux, or macOS - zero config, grounded intelligence.

One Command. Zero Config. Grounded Intelligence.

Commando (cmdo) is a local-first CLI agent that turns plain-English (or Vietnamese!) intent into safe, validated sui / walrus / site-builder commands. It downloads the upstream Mysten Labs binaries for you, bootstraps the wallet and config files you would otherwise hit as paper-cuts on first run, and uses an LLM grounded in live --help output to plan each command.

Current release: v0.2.4-beta — cross-platform (Windows / Linux / macOS), GitHub-direct binary downloads, no secrets shipped in the npm tarball.

Why Commando

Zero setup

One npm install puts sui, walrus, and site-builder on your PATH and bootstraps every config file they need.

Grounded LLM

The planner is constrained to commands and flags parsed from each binary's --help output, with strict allowlist validation.

Cross-platform

Same one-line install on Windows (PowerShell), Ubuntu, and macOS (Intel + Apple Silicon).

Safe by default

A safety gate blocks destructive intent (rm -rf /, format, dd if=, shutdown, ...) before any process is spawned.
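As a minimal sketch of the idea (illustrative TypeScript, not the actual Commando source; the pattern list and function name are assumptions), the gate is a pattern check run against the user's intent before anything is spawned:

```typescript
// Illustrative safety gate: reject obviously destructive intent
// before any child process is spawned. The real pattern set in
// Commando may differ; these mirror the examples in the docs.
const DESTRUCTIVE_PATTERNS: RegExp[] = [
  /\brm\s+-rf\s+\//,   // recursive delete from root
  /\bformat\b/i,       // disk formatting
  /\bdd\s+if=/,        // raw disk writes
  /\bshutdown\b/i,     // host shutdown
];

function isDestructive(input: string): boolean {
  return DESTRUCTIVE_PATTERNS.some((p) => p.test(input));
}
```

Because the check runs on the planned intent rather than inside a shell, it works identically on every supported OS.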

60-second tour

# 1. Install (any OS, Node 20+)
npm install -g sui-commando@beta

# 2. Configure your LLM provider once
cmdo init

# 3. Talk to the Sui ecosystem
cmdo "create new sui address"
cmdo "give me testnet sui from faucet"
cmdo "build my move package"
cmdo "deploy static site in ./dist to walrus-sites" --site-builder

That's the whole loop. No sui client switch --env testnet, no manual walrus get-wal, no chasing the right site-builder deploy flags — Commando plans the command, validates it against the live skill contract, and streams the output back.
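The "stream the output back" part can be sketched roughly like this (hypothetical TypeScript, not the actual Commando source; the HINTS table and run signature are assumptions, with the WAL hint taken from the docs):

```typescript
import { spawn } from "node:child_process";

// Illustrative execution step: spawn the real binary, stream output
// live, and keep a stderr tail to match known failure patterns so a
// hint can be printed after the process exits.
const HINTS: Array<[RegExp, string]> = [
  [/WAL/i, 'missing WAL coins -> try cmdo "get wal"'],
];

function run(binary: string, args: string[]): Promise<number> {
  return new Promise((resolve, reject) => {
    const child = spawn(binary, args, { stdio: ["inherit", "inherit", "pipe"] });
    let tail = "";
    child.stderr!.on("data", (chunk: Buffer) => {
      process.stderr.write(chunk);                    // stream live
      tail = (tail + chunk.toString()).slice(-4096);  // keep last 4 KiB
    });
    child.on("error", reject);
    child.on("close", (code) => {
      for (const [pattern, hint] of HINTS) {
        if (pattern.test(tail)) console.error(`hint: ${hint}`);
      }
      resolve(code ?? 1);
    });
  });
}
```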

How it works

  1. Router picks the target binary from explicit flags (--sui, --walrus, --site-builder) or keyword inference.
  2. Skill loader trims ~/.commando/skills/AGENT.md (auto-generated from --help) to only the relevant tool.
  3. LLM planner (OpenAI or OpenRouter) emits structured { binary, args }. The output is parsed defensively and the command path is validated against the allowlist; hallucinated subcommands trigger a retry instead of being executed.
  4. Safety gate rejects destructive patterns on every supported OS.
  5. Execution engine spawns the real binary, streams stdout/stderr live, and matches the stderr tail against well-known failure patterns to print actionable hints (e.g., "missing WAL coins → try cmdo \"get wal\"").
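Steps 3 and the retry behavior can be sketched as follows (hypothetical TypeScript, not the actual source; the allowlist contents and function names are assumptions, and Commando derives its real allowlist from live --help output rather than hardcoding it):

```typescript
// Illustrative plan validation: check the planner's structured
// { binary, args } output against an allowlist of known subcommands,
// and retry instead of executing a hallucinated command.
type Plan = { binary: string; args: string[] };

// Example allowlist; the real one is generated from --help output.
const ALLOWLIST: Record<string, Set<string>> = {
  sui: new Set(["client", "move", "keytool"]),
  walrus: new Set(["store", "read", "get-wal"]),
};

function validatePlan(plan: Plan): boolean {
  const known = ALLOWLIST[plan.binary];
  if (!known) return false;
  const sub = plan.args.find((a) => !a.startsWith("-")); // first positional arg
  return sub !== undefined && known.has(sub);
}

async function planWithRetry(ask: () => Promise<Plan>, maxTries = 3): Promise<Plan> {
  for (let i = 0; i < maxTries; i++) {
    const plan = await ask();
    if (validatePlan(plan)) return plan; // safe to hand to the executor
  }
  throw new Error("planner kept producing unknown subcommands");
}
```

The key property is that a hallucinated subcommand can never reach the execution engine: it fails validation and costs one retry instead.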

Read the full design in Architecture.

What ships in v0.2.4-beta

| Area | Status |
| --- | --- |
| Windows x86_64 | Stable |
| Linux x86_64 | Stable |
| Linux aarch64 | Stable (site-builder skipped — no upstream build) |
| macOS x86_64 / arm64 | Stable |
| Auto-bootstrap Sui wallet | Yes (~/.sui/sui_config/client.yaml) |
| Auto-bootstrap Walrus config | Yes (~/.config/walrus/client_config.yaml, testnet default) |
| LLM providers | OpenAI, OpenRouter |
| Mock planner | CMDO_LLM_MOCK=1 for offline demos |
| Secret-free tarball | Yes — operator credentials read from env vars only |

Where to go next
