A multi-agent GIS consulting firm that runs on your laptop.
Nine specialist agents: lead analyst, retrieval, processing, spatial stats, cartography, QA, reporting, publishing, peer review. They read from 134 pages of encoded methodology and 155 production scripts. You type a spatial question. They hand back styled maps, an interactive web map, an HTML report, a QGIS project, and optionally an ArcGIS Pro package.
They run sequentially. Retrieval finishes before processing starts. Processing finishes before the statistician sees it. Each stage writes a structured handoff the next stage reads. The agents work in a pipeline.
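The handoff pattern is simple enough to sketch. Everything below is illustrative: the stage names, fields, and artifacts are hypothetical, not the repo's actual handoff schema.

```python
from dataclasses import dataclass, field

@dataclass
class Handoff:
    stage: str                                  # which agent runs next
    artifacts: dict = field(default_factory=dict)
    notes: list = field(default_factory=list)

def retrieval(h: Handoff) -> Handoff:
    h.artifacts["acs_table"] = "B17001"         # what was fetched
    h.notes.append("retrieval: pulled ACS poverty table")
    return Handoff("processing", h.artifacts, h.notes)

def processing(h: Handoff) -> Handoff:
    h.artifacts["joined_layer"] = "tracts_poverty.gpkg"
    h.notes.append("processing: joined to tract geometries")
    return Handoff("spatial_stats", h.artifacts, h.notes)

PIPELINE = [retrieval, processing]              # ...stats, cartography, QA, etc.

def run(question: str) -> Handoff:
    h = Handoff("retrieval", {"question": question})
    for stage in PIPELINE:                      # strictly sequential: each stage
        h = stage(h)                            # reads the previous stage's handoff
    return h

result = run("What does poverty look like in Cook County?")
```

The point of the structured handoff is that each stage's output is inspectable on disk, so a failed run can be resumed or audited mid-pipeline.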
It's not magic. It's what a senior cartographer would do if they had a small team of tireless interns who each read one textbook.
The agent gets you to a styled, methodologically sound draft in ten minutes. The last twenty minutes of cartographic judgment are still yours. That's the deal.
macOS / Linux / WSL
git clone https://github.com/spatial-machines/v0.git
cd v0
python -m venv venv && source venv/bin/activate
pip install -r requirements.txt
cp .env.example .env
python demo.py
Windows (PowerShell)
git clone https://github.com/spatial-machines/v0.git
cd v0
python -m venv venv
venv\Scripts\Activate.ps1
pip install -r requirements.txt
copy .env.example .env
python demo.py
Four minutes to the demo map. Then launch your coding agent inside the repo (claude, codex, whatever you use) and ask it something real:
What does poverty look like in Cook County, Illinois?
The system picks up from there.
Fork it. The point isn't to use my system. The point is to run yours. Your data sources, your palette rules, your report templates, your domain logic.
Need a fetch script for an API your team actually uses? Ask your agent. It writes the code, registers it in the data-source catalog, documents it, wires it into the retrieval role. You review the diff. That's the loop. The system is designed to be extended by the same agents that run it.
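The registration step is the interesting part of that loop. As a sketch only (the catalog's real format in the repo may differ, and the API here is a made-up stand-in):

```python
# Hypothetical shape of a data-source catalog entry.
CATALOG: dict = {}

def register(name: str, fetch_fn, docs: str) -> None:
    """Make a fetch script discoverable by the retrieval role."""
    CATALOG[name] = {"fetch": fetch_fn, "docs": docs}

def fetch_noise_complaints(county_fips: str) -> list:
    # Your agent would write the real HTTP call against your API here;
    # this stub just shows the contract the retrieval role expects.
    return [{"county": county_fips, "complaints": 0}]

register(
    "noise_complaints",
    fetch_noise_complaints,
    docs="City noise-complaint API; returns per-county counts.",
)

rows = CATALOG["noise_complaints"]["fetch"]("17031")
```

Once registered, the retrieval agent can find the source by name instead of you wiring it in by hand.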
Every customization lands in PATCH.md: intent, files touched, why. When upstream ships an update, you pull, and your agent re-applies your patches by reading the recorded intent. Your work survives my updates.
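An entry might look something like this. The exact headings are hypothetical; check the repo's actual PATCH.md template before relying on them:

```markdown
## 2025-06-02 — custom fetch for city noise-complaint API

**Intent:** retrieval role should be able to pull per-county
noise-complaint counts from our internal API.

**Files touched:** scripts/fetch_noise.py, catalog/sources.yaml

**Why:** Census/EPA sources don't cover this; our reports need it.
```

What matters is that the *intent* is recorded in prose, so an agent can re-apply the change against moved or rewritten upstream files instead of replaying a stale diff.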
And the wiki is a textbook. 134 pages on palette choice, classification methods, MOE handling, FDR correction, colorblind checks. The tradecraft normally locked inside senior practitioners' heads. Use the tool to work. Read the wiki to grow.
No SaaS. No accounts. No telemetry. No cloud dependency beyond the public data APIs (Census, EPA, NOAA, the usual). No lock-in to any one coding agent: Claude Code, Codex, OpenCode, Cursor, Aider all work.
No "AI-powered" anything in the copy, because it's all AI-powered and saying so is meaningless. No feature matrix. No enterprise tier. No pretending the output is 10/10. The agent gets you to a styled draft fast. The last mile is still cartography, and cartography is still you.
No RAG. No vector store. No embeddings. The methodology is written plainly in markdown because that's how cartographers actually learn.
Every "AI does GIS" demo I've watched for the last year has made the same slop. Wrong palette for the data type. No margin-of-error handling on ACS. Legend titles that are raw field names. Projections nobody thought about.
So I spent six months encoding what a senior cartographer actually does (the palette rules, the MOE flags, the FDR correction on hotspots, the colorblind checks) into a system where the agent has to consult the methodology before it draws anything. It's the tool I wanted to use.
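The MOE handling is the kind of rule that fits in a few lines once written down. A minimal sketch: the 1.645 divisor converting a 90%-confidence ACS margin of error to a standard error is standard Census practice, but the CV cutoffs (12% / 40%) are a common reliability convention, not necessarily the exact thresholds this repo's methodology uses.

```python
def reliability_flag(estimate: float, moe: float) -> str:
    """Flag an ACS estimate by its coefficient of variation (CV)."""
    if estimate == 0:
        return "unreliable"              # CV is undefined at zero
    se = moe / 1.645                     # ACS MOEs are published at 90% confidence
    cv = 100 * se / abs(estimate)        # CV as a percentage
    if cv <= 12:
        return "reliable"
    if cv <= 40:
        return "use with caution"
    return "unreliable"

print(reliability_flag(estimate=1500, moe=120))   # small CV
print(reliability_flag(estimate=40, moe=35))      # MOE nearly as big as the estimate
```

A system that runs this check before styling a choropleth can gray out or hatch unreliable tracts instead of mapping noise as if it were signal.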
Apache-2.0. Use it commercially, fork it, build on it. The name and logo are trademarks. See TRADEMARK.md before shipping something called "Spatial Machines."