THE PROBLEM

Even the best helmsmen can't navigate these waters alone.

CRITICAL
CURRENT OPERATIONAL STACK
FRACTURED
  • Terraform / Crossplane / AWS CDK
  • Docker / Nix
  • Kubernetes / Nomad
  • Helm / Kustomize
  • ArgoCD / Flux
  • GitHub Actions / CircleCI
  • Prometheus / Datadog
  • Grafana
  • PagerDuty / Opsgenie
  • Jaeger / Honeycomb
  • IAM / Vault
NOMINAL
ULMO OPERATIONAL STACK
ULMO [ EVERYTHING ]
let relay = Service::builder("edge-relay")
    .replicas(5)
    .identity(Identity::Stable)
    .peers(true)
    .data(Data::Ephemeral)
    .port("default")
    .port("raft")
    .connect("verification-sink")
    .source(SourceBinding::github("co", "edge-relay"))
    .build();

let sink = Service::builder("verification-sink")
    .port("default")
    .connect("edge-relay")
    .build();

let topo = Topology::builder()
    .add_service(relay)
    .add_service(sink)
    .add_store(Store::postgres("events-db").build())
    .build()?;

let output = Compiler::new().compile(&topo)?;

INFRASTRUCTURE, BY [co] DESIGN

Define your system. The topology compiler handles the rest.

[ SCENARIO TESTING ]
The topology is a value you can hold. Write tests against it.

Does the system survive an AZ failure? Is the blast radius bounded? It's cargo test for your infrastructure.
#[test]
fn validate_cluster_policies() {
    let topo = load_topology();

    // All public services require auth
    for svc in topo.public_services() {
        assert!(svc.has_capability(AuthRequired));
    }

    // Blast radius is bounded per team
    let affected = topo.blast_radius("new-service");
    assert!(
        affected.iter().all(|svc| svc.owner() == "payments-team"),
        "affects services outside payments-team: {:?}",
        affected.outside_team("payments-team")
    );

    // No critical services in the blast radius
    assert!(
        affected.iter().all(|svc| svc.tier() != Tier::Critical),
        "affects critical services: {:?}",
        affected.critical()
    );

    // Fleet can actually run it
    assert!(prod_fleet.validate(&topo).is_ok());
}
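The AZ-failure question can be tested the same way. This is a hedged sketch only: `availability_zones`, `simulate`, `Failure::Zone`, and `available` are hypothetical names for illustration, not part of the SDK shown above.

```rust
// Hypothetical sketch -- assumes a fault-simulation API on Topology.
#[test]
fn survives_az_failure() {
    let topo = load_topology();
    // Knock out each availability zone in turn and check that
    // every public service still has a healthy replica.
    for az in topo.availability_zones() {
        let degraded = topo.simulate(Failure::Zone(az));
        assert!(
            degraded.public_services().iter().all(|svc| svc.available()),
            "public service unavailable after losing {:?}",
            az
        );
    }
}
```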
[ CONTINUOUS INVARIANTS ]
Understanding what's running shouldn't require help from Indiana Jones.

The compiled topology is the source of truth — the control plane continuously verifies that reality matches.
#[ulmo::invariant(interval = "30s")]
fn infrastructure_matches_topology(
    topo: &Topology,
    fleet: &Fleet,
) -> Result<()> {
    // No unmanaged resources in the cluster
    let unknown = fleet.diff(topo).unknown();
    ensure!(unknown.is_empty(), "unmanaged resources: {:?}", unknown);

    // Network policies match the topology edges
    for edge in topo.edges() {
        ensure!(fleet.can_reach(edge.from(), edge.to()));
    }
    for pair in topo.denied_pairs() {
        ensure!(!fleet.can_reach(pair.0, pair.1));
    }

    // Storage volumes match declared stores
    for store in topo.stores() {
        let vol = fleet.volume(store.name());
        ensure!(vol.class() == store.storage_class());
        ensure!(vol.encrypted());
    }

    // DNS entries resolve to the right backends
    for svc in topo.public_services() {
        let dns = fleet.resolve(svc.fqdn());
        ensure!(dns.targets().all(|t| t.in_topology(topo)));
    }

    Ok(())
}
[ CHANGE POLICIES ]
Every topology change — human or agent — runs through the same gate.

The control plane validates the diff, checks blast radius, and enforces user-defined policies before anything touches the cluster.
#[ulmo::onchange]
fn validate_topology_change(
    diff: &TopologyDiff,
    fleet: &Fleet,
) -> Result<()> {
    // Blast radius must be bounded
    ensure!(diff.blast_radius() <= 2, "change affects too many services");

    // Can't cross team boundaries
    ensure!(
        diff.affected_teams().len() <= 1,
        "change crosses team boundary: {:?}",
        diff.affected_teams()
    );

    // Must be rollback-safe
    ensure!(diff.rollback_safe(), "change contains irreversible operations");

    // Fleet has capacity for the new state
    fleet.validate(&diff.proposed())?;

    Ok(())
}

THE INTERFACE

Three ways in. Same topology. CLI, web, or TUI.

TOPOLOGY COMPILER

You define the topology. Resources are a compilation target.

[ src/main.rs ]
// Define the system. The compiler handles the rest.
use ulmo::prelude::*;

let relay = Service::builder("edge-relay")
    .replicas(5)
    .identity(Identity::Stable)
    .peers(true)
    .data(Data::Ephemeral)
    .port("default")
    .port("raft")
    .connect("verification-sink")
    .source(SourceBinding::github("co", "edge-relay"))
    .build();

let sink = Service::builder("verification-sink")
    .port("default")
    .connect("edge-relay")
    .build();

let topo = Topology::builder()
    .add_service(relay)
    .add_service(sink)
    .add_store(Store::postgres("events-db").build())
    .build()?;

// Compile to infrastructure
let output = Compiler::new().compile(&topo)?;
CODESIGN Service + infrastructure in one definition. The topology is part of the source.
COMPILE-TIME Invalid topologies don't build. Constraints verified before anything touches a cluster.
AUDITABLE Blast radius, dependencies, public surface: ask questions of the graph.
ZERO GLUE No YAML. No Helm charts. Resources are a compilation target, not a concern.

TYPED CODE. NOT CONFIG FILES.

ARCHITECTURE

The intelligence is in the compiler. The deployment substrate is fungible.

SOURCE
TOPOLOGY DEFINITION
Rust SDK or TOML sugar
Service::builder() Topology::builder() Store::postgres() SourceBinding
IR
TOPOLOGY GRAPH
Typed nodes, edges, and capability constraints
Services Stores Connections Capabilities Typed Config
VALIDATE
SEMANTIC ANALYSIS
Invalid topologies don't compile
Capability type-checking Cycle detection Blast radius Fleet compatibility
CODEGEN
PLUGGABLE BACKENDS
Same IR, different targets
TARGETS
K8S
+ Crossplane
OXIDE
+ Propolis
NOMAD
+ Consul
ECS
+ CloudFormation
BARE METAL
+ systemd
CONTROL PLANE
BUILDER
Nix builds. Content-addressed artifacts. Deterministic.
DEPLOYER
Applies compiled manifests. Health checks. Status tracking.
PROMOTER
Canary → beta → stable. Automatic gates. Bake times.

Ulmo deploys itself through its own pipeline. It's just another Ulmo topology, same compiler.

THE PRODUCTION SOFTWARE COMPANY

Software is on the critical path for everything: your money, your health records, your transit, your communication. You can't opt out. None of us can.

And it's broken. Routinely. Incidents are accepted as inevitable. "AWS is down" is a meme, not a crisis. We've collectively lowered our expectations to match what the industry delivers.

This isn't because people got dumber. It's because complexity outpaced tooling. The systems are too big for anyone to hold in their head. "Move fast and break things" won, and now we're stuck with the broken things.

And it's about to get worse.

AI is generating code faster than humans can review it. The same teams that are drowning with tens of services will soon have hundreds. More deployments, more complexity, more surface area — same tools, same processes, same 3am pages.

The Production Software Company was founded by people fed up with the quality and reliability of the software that has found its way onto the critical path of our daily lives.

We're not pessimists about software. The opposite — we've built and operated software for some of the world's most critical institutions, and we're big believers in the good it can do. Software should make healthcare more accessible, infrastructure more resilient, daily life less frustrating. It can. Sometimes it does.

Sometimes isn't good enough.

Ulmo is our first product: infrastructure you can understand, test, and trust. But the mission is broader. Production software should be as reliable as the things it replaced. We're building tools for people who take that responsibility seriously.

JOIN THE WAITLIST

Ship faster with fewer incidents. We'll let you know when it's ready.

EARLY ACCESS
Be first to define your infrastructure as a program. Topology compiler, typed SDK, auditable graph.