An intermediate-level TypeScript framework for building web applications with Rhino Compute and Grasshopper.
@selvajs/compute simplifies the process of communicating with Rhino Compute, handling Grasshopper definitions, and visualizing results in the browser with Three.js.
```bash
npm install @selvajs/compute three
```

(Note: `three` is a peer dependency, only needed if you use the visualization features.)
@selvajs/compute provides a type-safe, production-ready foundation for building with Rhino Compute:
- `GrasshopperClient` for one-off solves; `client.createScheduler()` for any UI that fires solves frequently.
- `AbortSignal` support, exponential-backoff retries on transient errors, and `Retry-After` honored on 429.
- Latest-wins scheduling aborts stale solves when newer values arrive. An optional response cache makes repeated inputs instant.
- `initThree()` and configurable rendering options.

Whether you're building a simple solver, a slider-driven configurator, or a long-running job submission flow, @selvajs/compute handles the plumbing so you can focus on your Grasshopper definitions.
What this is not: a job queue. For solves longer than a couple of minutes, run this library server-side behind your own queue (BullMQ / SQS / Cloud Tasks) and expose a status endpoint to the browser.
Note: The library currently focuses on the Grasshopper endpoint but is designed to support other Rhino Compute endpoints in future releases.
Every solve in @selvajs/compute goes through a scheduler. The scheduler
handles cancellation, retries, loading state, and (optionally) a response cache
— things every real app needs and shouldn't have to rebuild.
```typescript
import { GrasshopperClient, TreeBuilder, GrasshopperResponseProcessor } from '@selvajs/compute';

const client = await GrasshopperClient.create({
  serverUrl: 'http://localhost:6500',
  apiKey: 'your-api-key'
});

// Configure the scheduler for your workload (see "Configuring the scheduler" below).
const scheduler = client.createScheduler({ mode: 'latest-wins', timeoutMs: 30_000 });

// Inspect the definition's inputs once, build a data tree.
const io = await client.getIO('my-definition.gh');
const inputTree = TreeBuilder.fromInputParams(io.inputs);

// Solve. Returns a Promise — call it as often as you like.
const result = await scheduler.solve('my-definition.gh', inputTree);
const { values } = new GrasshopperResponseProcessor(result).getValues();
```
Wire the scheduler's state into your UI for spinners and disabled buttons:
```typescript
scheduler.subscribe(() => {
  showSpinner = scheduler.isSolving;
  disableSubmit = scheduler.hasPending;
});
```
And handle expected cancellations gracefully — when newer values supersede an in-flight solve, or when the user aborts:
```typescript
scheduler.solve(definition, inputTree).catch((err) => {
  if (/superseded|aborted/i.test(err.message)) return; // expected, not an error
  showError(err);
});
```
The scheduler is one API with two knobs that matter, `mode` and `timeoutMs`, plus a couple of optional ones. Pick the row that matches what the user is doing in your UI:
| Workload | `mode` | `timeoutMs` | `retry` | Notes |
|---|---|---|---|---|
| Slider scrubs / live previews | `'latest-wins'` | `30_000` | default | Aborts in-flight solves when newer values arrive. Add `cache: { ttlMs: 60_000 }` for instant repeats. |
| Submit / long-running jobs | `'queue'` | `0` (no timeout) | `{ attempts: 1 }` | Serial queue. Pass a caller signal so users can hit Cancel. Bump proxy idle timeouts (see below). |
| Background / batch parallel | `'parallel'` | `60_000` | `{ attempts: 2 }` | Fires solves concurrently up to `maxConcurrent` (default 4). |
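To make the latest-wins row concrete, here is a standalone sketch of the pattern. It illustrates the semantics only and is not the library's actual implementation:

```typescript
// Sketch of latest-wins scheduling: each new call aborts the previous
// in-flight task, so only the newest result ever reaches the UI.
function createLatestWins<T>() {
  let current: AbortController | null = null;
  return async (task: (signal: AbortSignal) => Promise<T>): Promise<T> => {
    // A newer call supersedes whatever is still running.
    current?.abort();
    const ctrl = new AbortController();
    current = ctrl;
    try {
      return await task(ctrl.signal);
    } finally {
      if (current === ctrl) current = null;
    }
  };
}
```

The library layers retries, timeouts, and the optional cache on top of this core idea.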
You can create multiple schedulers from one client — typically one per UI surface. They share the connection pool but their queues, cancel scopes, and caches are independent:
```typescript
const previewScheduler = client.createScheduler({ mode: 'latest-wins', timeoutMs: 30_000 });
const submitScheduler = client.createScheduler({ mode: 'queue', timeoutMs: 0, retry: { attempts: 1 } });
```
Pass a per-call signal to cancel just that solve, or call cancelAll() to
cancel everything (e.g. on route change or component unmount):
```typescript
const ctrl = new AbortController();
scheduler.solve(definition, tree, { signal: ctrl.signal });

// Later:
ctrl.abort();          // cancel just this call
scheduler.cancelAll(); // cancel everything in flight + pending
scheduler.dispose();   // cancel everything and tear down the scheduler
```
Cloudflare's default idle timeout is 100s; AWS ALB's is 60s; nginx is 60s. If your Compute server is behind any of them, those values must be bumped before you can run long solves through the browser — the library cannot work around proxy timeouts.
For solves longer than ~2 minutes, the safer architecture is to run this library server-side behind your own job queue (BullMQ / SQS / Cloud Tasks) and expose a status endpoint to the browser.
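The shape of that server-side flow can be sketched as follows. The in-memory `Map` is a stand-in for a real queue, and the injected `solve` function is whatever actually runs the job (for example, this library's scheduler running on the server):

```typescript
// Sketch of the submit-and-poll pattern: the browser submits a job and polls
// a status endpoint instead of holding one connection open for the whole solve.
type JobStatus = 'pending' | 'done' | 'failed';
interface Job<T> { status: JobStatus; result?: T; error?: string }

function createJobStore<T>(solve: (input: unknown) => Promise<T>) {
  const jobs = new Map<string, Job<T>>();
  let nextId = 0;
  return {
    // What your POST /jobs endpoint would call; returns immediately.
    submit(input: unknown): string {
      const id = String(nextId++);
      jobs.set(id, { status: 'pending' });
      solve(input)
        .then((result) => jobs.set(id, { status: 'done', result }))
        .catch((err) => jobs.set(id, { status: 'failed', error: String(err) }));
      return id;
    },
    // What your GET /jobs/:id endpoint would return.
    status(id: string): Job<T> | undefined {
      return jobs.get(id);
    },
  };
}
```

The browser polls the status endpoint until the job is `done` or `failed`; a production version would persist jobs in BullMQ, SQS, or Cloud Tasks rather than a `Map`.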
@selvajs/compute works with both standard Rhino Compute and enhanced versions:
- Standard Rhino Compute – The official McNeel repository works for basic Grasshopper solving with core features.
- Enhanced Setup (Recommended) – Unlocks advanced features such as the `groupName` property. Features requiring the enhanced setup are clearly marked in the documentation.
`Network error: Failed to fetch` means the browser couldn't reach the server. Check, in order:

1. The server is running: `curl http://localhost:6500/healthcheck` should return a 200.
2. CORS: the response must include `Access-Control-Allow-Origin`. Standard Rhino Compute does not ship with CORS enabled; you'll need to put it behind a proxy that adds the headers, or use the VektorNode custom branch.
3. Auth: `apiKey` is missing for a server that requires one (the server typically returns 401 with no CORS headers, which the browser surfaces as a network error).

For timeouts, the bottleneck is almost always a proxy in front of Compute, not the library. Common culprits:

- AWS ALB: the `idle_timeout` attribute.
- nginx: `proxy_read_timeout` and `proxy_send_timeout`.

For solves longer than ~2 minutes, prefer running this library server-side and exposing your own job-status endpoint to the browser. Direct browser → Compute is fine for short solves but fragile for long ones.
`Definition URL/content is required` means you called `client.solve('', tree)` or passed a `Uint8Array` of length 0. Validate your input before calling.
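A hypothetical guard (not a library export) that catches this before the network call:

```typescript
// Hypothetical guard: rejects empty definitions before they ever reach
// client.solve and trigger this error.
function assertSolvableDefinition(def: string | Uint8Array): void {
  const empty = typeof def === 'string' ? def.trim().length === 0 : def.length === 0;
  if (empty) throw new Error('Definition URL/content is required');
}
```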
An authentication error usually means one of the following:

- `apiKey` (the `RhinoComputeKey` header) is missing or invalid. Standard Rhino Compute uses this scheme.
- `authToken` (Bearer) was rejected by an upstream proxy/API gateway. The Compute server itself almost never returns 403.

The error message includes a response body excerpt, so you usually get a hint from the server itself.
Rejections mentioning "superseded" or "aborted" are the scheduler doing its job in latest-wins mode: every aborted slider solve rejects this way. Filter them out:

```typescript
scheduler.solve(def, tree).catch((err) => {
  if (/superseded|aborted/i.test(err.message)) return; // expected, not an error
  showError(err);
});
```
If the dynamic import of the visualization layer throws, make sure `three` is installed (`npm install three`); it's a peer dependency, not a direct one.
This library is built on production experience and draws from several official McNeel repositories. Where code has been adapted, it is clearly marked in the relevant files.
MIT