alpha · v0.2.0 · not for production
typescript · effect · infrastructure as code

Zero → production.

Pure TypeScript Infrastructure as Code. For frontends, backends, and the cloud they run on.

Get started · Tutorial · GitHub
plan · deploy · destroy

Plan, deploy, destroy.

Declare resources inside an Alchemy.Stack. alchemy plan shows you exactly what will change. deploy applies it. destroy reverses it. Learn more →

alchemy.run.ts
// alchemy.run.ts — the entrypoint
export default Alchemy.Stack(
  "MyApp",
  { providers: Cloudflare.providers() },
  Effect.gen(function* () {
    yield* Photos;
    yield* Sessions;
    const api = yield* Api;

    return { url: api.url };
  }),
);
One stack, one TypeScript program. Resources live in your accounts.
type-safe bindings · least-privilege iam

The IAM policy writes itself.

Least-privilege IAM, generated from how you use resources in TypeScript. Connect a Worker to an R2 bucket, a DynamoDB table, an SQS queue — Alchemy emits the exact policy. Subscribe to a stream and you get the EventSourceMapping with the right permissions, automatically. Learn more →

src/JobApi.ts
export default class JobApi extends AWS.Lambda.Function<JobApi>()(
  "JobApi",
  Effect.gen(function* () {
    const get = yield* S3.GetObject.bind(Photos);
    const put = yield* DynamoDB.PutItem.bind(Jobs);
    yield* DynamoDB.stream(Jobs).process(handler);
    // handler uses get / put / stream …
  }),
) {}
Photos → S3.Bucket · Jobs → DynamoDB.Table · Jobs.stream → EventSourceMapping
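For the three bindings in JobApi, the emitted policy would be shaped roughly like this. An illustrative sketch, not alchemy's literal output: the ARNs are placeholders, and the stream statement lists the actions a Lambda event source mapping requires.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::photos/*" },
    { "Effect": "Allow", "Action": "dynamodb:PutItem", "Resource": "arn:aws:dynamodb:*:*:table/Jobs" },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeStream",
        "dynamodb:GetRecords",
        "dynamodb:GetShardIterator",
        "dynamodb:ListStreams"
      ],
      "Resource": "arn:aws:dynamodb:*:*:table/Jobs/stream/*"
    }
  ]
}
```

Nothing here grants `s3:PutObject` or `dynamodb:GetItem`: the policy only covers the operations the code actually binds.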
alchemy dev · hybrid local + remote

Workers run locally. Resources run live.

alchemy dev deploys your R2 buckets, KV namespaces, D1 databases — the actual cloud resources — and runs your Worker code locally in workerd, the same runtime as production. Edit a file, the Worker reloads. Add a resource, alchemy diffs and wires it. Learn more →

No emulation. No fidelity gaps.
Your R2 bucket is a real R2 bucket. Your D1 database is a real D1 database. Only your Worker runs locally.
Hot reload in milliseconds
Edit a handler, save — the running instance swaps in under 100ms. State on bound resources persists across reloads.
Resource graph hot-deploys
Add a Queue or change a binding — alchemy detects the diff, creates the resource, rewires the Worker. No restart.
Attach a debugger
Workers run in workerd as a local process. Set breakpoints, inspect variables, profile.
iac in your tests

Tests deploy. Tests destroy. One stack per suite.

A Stack is just an Effect, so you can yield it from a test. deploy in beforeAll, destroy in afterAll. Each suite gets its own stage, so suites run in parallel without collision. Learn more →

test/api.test.ts
import { afterAll, beforeAll, deploy, destroy, expect, test }
  from "alchemy/Test/Bun";
import * as HttpClient from "effect/unstable/http/HttpClient";
import Stack from "../alchemy.run.ts";

const stack = beforeAll(deploy(Stack, { stage: `pr-${Date.now()}` }));
afterAll.skipIf(!process.env.CI)(destroy(Stack));

test("PUT + GET round-trips through R2", Effect.gen(function* () {
  const { url } = yield* stack;
  const res = yield* HttpClient.get(`${url}/object/hello.txt`);
  expect(yield* res.text).toBe("hi!");
}));
No mocks
Your tests run against real R2, real DynamoDB, real Workers. What passes locally passes in prod.
Per-suite isolation
Stage names from PR or test ID. Two suites run the same tests in parallel without collision.
Effect-aware test runner
Vitest and Bun test wrapped with Effect support. Yield from any test, get typed errors at the assertion line.
ci/cd · preview branch per pr

Every PR is a preview. Auto-destroyed on merge.

Open a PR — alchemy deploys an isolated stage, comments the URL on the PR, and tears it down on merge. One workflow file. Every environment. Learn more →

  1. PR opened
  2. Deploy
  3. Comment
  4. Merged & destroyed
#147 (open): Add image upload to /photos
feature/photo-upload → main
Deploy preview: queued
ci · pr-147 · PR OPENED
# pull_request opened — STAGE=pr-147
# workflow queued…
.github/workflows/deploy.yml
# .github/workflows/deploy.yml
env:
  STAGE: ${{ github.event_name == 'pull_request'
            && format('pr-{0}', github.event.number)
            || (github.ref == 'refs/heads/main' && 'prod' || github.ref_name) }}
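The expression's stage rule can be mirrored in plain TypeScript for clarity. A sketch only; `stageFor` is not part of alchemy's API:

```typescript
// pr-{n} for PRs, prod for main, branch name otherwise —
// the same three-way rule as the workflow expression above.
function stageFor(eventName: string, prNumber: number | undefined, ref: string): string {
  if (eventName === "pull_request" && prNumber !== undefined) return `pr-${prNumber}`;
  if (ref === "refs/heads/main") return "prod";
  return ref.replace(/^refs\/heads\//, ""); // plain branch name
}

console.log(stageFor("pull_request", 147, "refs/pull/147/merge")); // "pr-147"
console.log(stageFor("push", undefined, "refs/heads/main"));       // "prod"
console.log(stageFor("push", undefined, "refs/heads/feature/x"));  // "feature/x"
```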
Stage from event
pr-{n} for PRs, prod for main, branch name otherwise — one expression, no scripting.
Safety check on cleanup
The cleanup job refuses to destroy prod. Even if you manage to trigger it on the wrong event.
Auth via GitHub OIDC
Trade GitHub identity for an IAM role at runtime. No long-lived AWS keys in your repo.
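The cleanup guard and the OIDC exchange might look like this in the same workflow. A hedged sketch: the role ARN, region, and `alchemy destroy` invocation are illustrative placeholders, not alchemy's documented CLI.

```yaml
permissions:
  id-token: write   # lets the job request a GitHub OIDC token
  contents: read

jobs:
  cleanup:
    # runs only when a PR closes — never on a push to main
    if: github.event_name == 'pull_request' && github.event.action == 'closed'
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/alchemy-deploy   # placeholder ARN
          aws-region: us-east-1
      - name: Refuse to destroy prod
        if: env.STAGE == 'prod'
        run: exit 1
      - run: alchemy destroy   # STAGE comes from the env block shown earlier
```

The prod guard lives at the step level because the `env` context is not available in job-level `if` expressions.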
observability · otel + iac

Observability is just more infrastructure.

Effect already emits OpenTelemetry by default. Alchemy declares the exporter as a Layer — point it at Axiom, Datadog, CloudWatch, or any OTLP endpoint — and you ship the dashboard with the service. Alarms live next to the metrics they watch, in the same alchemy.run.ts. Learn more →

src/Api.ts
// Effect emits OpenTelemetry by default.
// Pick an exporter Layer; the Worker code never changes.
export default class Api extends Cloudflare.Worker<Api>()(
  "Api",
  Effect.gen(function* () {
    yield* Effect.logInfo("request received");
    yield* Metric.increment(requestsTotal);
    return { fetch: handler };
  }).pipe(
    Effect.provide(AxiomExporter),       // or CloudWatch, Datadog, OTLP …
  ),
) {}
Axiom (api.axiom.co) · Datadog (trace.agent.datadoghq.com) · CloudWatch (logs.aws.amazon.com) · Any OTLP (collector:4318)
alchemy.run.ts
// alchemy.run.ts — same program. operations included.
export const Dashboard = AWS.CloudWatch.Dashboard("ApiHealth", {
  widgets: [
    Widget.line({   title: "p99 latency",     metric: api.metrics.p99 }),
    Widget.line({   title: "requests / sec",  metric: api.metrics.rps }),
    Widget.number({ title: "5xx ratio",       metric: api.metrics.errorRate }),
  ],
});

export const P99Alarm = AWS.CloudWatch.Alarm("p99Latency", {
  metric: api.metrics.p99,
  threshold: 500,
  comparisonOperator: ">",
  evaluationPeriods: 5,
  alarmActions: [pagerDuty, slackWebhook],
});
hey —
alchemy is in alpha and not ready for production use (expect breaking changes). Come hang in our Discord to participate in the early stages of development.
Join Discord

Ship your cloud as one typed program.