Maestro — Graph RAG orchestration engine (FastAPI + React + pgvector)
Hi HN, I’m Minwoo, a solo developer from Seoul. Over the past two months I built Maestro, a deterministic orchestration engine that tries to replicate human judgment — not through LLM reasoning loops, but through a Graph RAG that stores and replays how decisions are made.
What it does
Maestro turns every decision artifact (persona, campaign, trend, draft, publication, comment) into a graph node with an embedding stored in PostgreSQL via pgvector. Each connection (edge) represents a relationship — “produces,” “comment_on,” “related_trend,” etc.
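As a rough sketch of that data model — all names here are hypothetical stand-ins, since the actual Maestro schema isn't shown:

```python
from dataclasses import dataclass

# Hypothetical subsets of Maestro's 9 node types and 7 edge types
NODE_TYPES = {"persona", "campaign", "trend", "draft", "publication", "comment"}
EDGE_TYPES = {"produces", "comment_on", "related_trend"}

@dataclass
class Node:
    id: int
    kind: str               # one of NODE_TYPES
    summary: str            # the text that gets embedded
    embedding: list[float]  # stored in a pgvector column in Postgres

@dataclass
class Edge:
    src: int       # id of the source node
    dst: int       # id of the target node
    relation: str  # one of EDGE_TYPES

n = Node(id=1, kind="trend", summary="example trend", embedding=[0.1, 0.2])
e = Edge(src=1, dst=2, relation="related_trend")
```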
When a user asks, “Show me drafts related to Trend X,” Maestro:

1. Embeds the query using multilingual-e5-base
2. Retrieves the nearest Trend node via vector similarity
3. Expands along relevant edges (related_trend → draft → publication)
4. Surfaces contextual summaries and KPIs, with no extra LLM calls required
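The steps above can be sketched in a few lines of pure Python. In the real system the query vector would come from multilingual-e5-base (the e5 family expects a "query: " prefix on query text) and the similarity search would run inside Postgres via a pgvector index; here a toy 2-dimensional vector and a hand-rolled cosine stand in:

```python
import math

def cosine(a, b):
    # pgvector computes this distance inside Postgres; pure Python for illustration
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def nearest(nodes, query_vec, kind):
    # Step 2: vector similarity over nodes of one type
    return max((n for n in nodes if n["kind"] == kind),
               key=lambda n: cosine(n["embedding"], query_vec))

def expand(edges, start_ids, relations):
    # Step 3: deterministic traversal, following one relation type per hop
    frontier = set(start_ids)
    for rel in relations:
        frontier = {e["dst"] for e in edges
                    if e["src"] in frontier and e["relation"] == rel}
    return frontier

# Toy graph: two trends, one draft linked to both
nodes = [
    {"id": 1, "kind": "trend", "embedding": [1.0, 0.0]},
    {"id": 2, "kind": "trend", "embedding": [0.0, 1.0]},
    {"id": 3, "kind": "draft", "embedding": [0.6, 0.4]},
]
edges = [
    {"src": 1, "dst": 3, "relation": "related_trend"},
    {"src": 2, "dst": 3, "relation": "related_trend"},
]

trend = nearest(nodes, [0.9, 0.1], "trend")            # step 2: Trend X
drafts = expand(edges, [trend["id"]], ["related_trend"])  # step 3: its drafts
```

Step 4 would then render summaries and KPIs from the nodes in `drafts` — no further model calls needed once the graph is built.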
The result is a reasoning-aware search engine that rebuilds why something happened, not just what.
How it works
Backend: FastAPI + Celery + PostgreSQL (pgvector) + Redis + SeaweedFS
Frontend: React 19 + Vite + Zustand + shadcn/ui
AI layer: Hugging Face multilingual-e5-base embeddings
Graph RAG: 9 node types, 7 edge types, deterministic traversal
Architecture: DAG executor (idempotent flows) + self-generative adapters
Every action is an idempotent operator. Flows are chained deterministically through a DSL → DAG pipeline. When two flows’ input/output types match, the system connects them automatically — new features emerge with zero code growth.
Example: comments.search + template.create → "Search comments and create reaction template." No new logic written, just an adapter between existing flows.
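A minimal sketch of that adapter idea — the names, type tags, and `compose` helper are all hypothetical, not Maestro's actual DSL:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Flow:
    name: str
    input_type: str
    output_type: str
    run: Callable[[Any], Any]

def compose(a: Flow, b: Flow) -> Flow:
    # The adapter: chaining is legal only when a's output type matches b's input type
    if a.output_type != b.input_type:
        raise TypeError(f"{a.name} -> {b.name}: {a.output_type} != {b.input_type}")
    return Flow(f"{a.name}+{b.name}", a.input_type, b.output_type,
                lambda x: b.run(a.run(x)))

COMMENTS = ["love this song", "meh", "love the choreo"]

search = Flow("comments.search", "query", "comments",
              lambda q: [c for c in COMMENTS if q in c])
create = Flow("template.create", "comments", "template",
              lambda cs: {"kind": "reaction", "sources": cs})

# "Search comments and create reaction template" — no new logic, just composition
combined = compose(search, create)
result = combined.run("love")
```

Because each flow declares its input/output types, the system can enumerate every type-compatible pair and expose the composition as a new feature without writing code for it.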