Resource Lifecycle

Every Resource goes through the same lifecycle: plan → create → update → (replace) → delete. This page walks through each step, what triggers it, and how alchemy recovers when things go wrong. For the CLI flags that drive these operations, see the CLI reference.

[Diagram: the resource lifecycle state machine. Plan routes each resource to Create, Update, Replace, Delete, or No-op; the legend marks resources as new, changed, breaking, removed, or unchanged.]

Each transition is implemented by the resource’s Provider. The same engine drives alchemy deploy, alchemy destroy, and alchemy dev.

Plan

When you run alchemy deploy, alchemy first plans the change. It compares the desired state (what your code declares) against the last persisted state and classifies each resource:

  • Create (+) — declared in code, not in state
  • Update (~) — declared and persisted, but properties differ
  • Replace (±) — change requires destroy-and-recreate
  • Delete (-) — persisted but no longer declared
  • No-op (no marker) — unchanged
Plan: 1 to create, 1 to update

+ Queue (AWS.SQS.Queue)
~ Worker (Cloudflare.Worker)
 Bucket (Cloudflare.R2Bucket)

The classification comes from each provider’s diff function. See Provider › diff for how providers decide between in-place updates and replacements.
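
As a rough sketch of that contract (the types below are illustrative, not alchemy's actual signatures), a diff takes the old and new props and returns the action the engine should take:

// Illustrative diff sketch; the shapes here are assumptions, not alchemy's real types.
type DiffAction = "noop" | "update" | "replace";

interface TableProps {
  partitionKey: string; // immutable on a live table
  readCapacity: number; // adjustable in place
}

function diff(oldProps: TableProps, newProps: TableProps): { action: DiffAction } {
  if (oldProps.partitionKey !== newProps.partitionKey) {
    return { action: "replace" }; // forces destroy-and-recreate
  }
  if (oldProps.readCapacity !== newProps.readCapacity) {
    return { action: "update" }; // applied in place by provider.update
  }
  return { action: "noop" };
}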

Use alchemy plan (or alchemy deploy --dry-run) to see the plan without applying it.

Create

When a resource doesn’t exist in state, alchemy calls provider.create. Creates are idempotent: physical names are deterministic from stack/stage/logical-id, so retrying a failed create finds the existing resource instead of duplicating it.

Resources with no dependencies create in parallel. Resources that depend on others wait for upstream Outputs to resolve.
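
The idea behind deterministic naming fits in a few lines. The format below is a hypothetical example, not alchemy's real scheme:

// Hypothetical naming sketch: the physical name is a pure function of
// stack, stage, and logical id, so a retried create resolves to the
// same resource instead of minting a new one.
function physicalName(stack: string, stage: string, logicalId: string): string {
  return `${stack}-${stage}-${logicalId}`.toLowerCase();
}

// physicalName("my-app", "prod", "queue") === "my-app-prod-queue"
// A retry recomputes the same name and finds the existing queue.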

Update

When properties change but the resource doesn’t need to be replaced, alchemy calls provider.update. The provider receives both the old and new props and applies the diff in place.

A second pass — convergence — re-runs update for any resource whose inputs changed because an upstream output changed mid-deploy.
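
In sketch form (the props and helpers below are hypothetical, not a real provider's API), an in-place update sees both sides of the diff and patches only what changed:

// Hypothetical in-place update: compare old and new props, touch only
// the fields that differ. `uploadScript` and `setMemory` stand in for
// real provider API calls.
interface WorkerProps {
  script: string;
  memoryMb: number;
}

async function update(id: string, oldProps: WorkerProps, newProps: WorkerProps): Promise<WorkerProps> {
  if (oldProps.script !== newProps.script) {
    await uploadScript(id, newProps.script);
  }
  if (oldProps.memoryMb !== newProps.memoryMb) {
    await setMemory(id, newProps.memoryMb);
  }
  return newProps; // persisted as the state the next plan diffs against
}

declare function uploadScript(id: string, script: string): Promise<void>;
declare function setMemory(id: string, memoryMb: number): Promise<void>;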

Replace

Some property changes can’t be applied in place — for example, changing a DynamoDB table’s partition key. The provider’s diff returns { action: "replace" }, and alchemy:

  1. Creates a new resource with a new instance ID
  2. Updates downstream resources to reference the new resource
  3. Deletes the old resource

Because new and old coexist briefly, dependents get a clean cutover without downtime.
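
The sequence is create-before-delete. A minimal sketch of the orchestration, with `Provider` and `rewireDependents` as hypothetical stand-ins for the engine's internals:

// Create-before-delete sketch; names here are illustrative.
interface Provider {
  create(props: unknown): Promise<string>; // returns a new instance id
  delete(instanceId: string): Promise<void>;
}

async function replace(provider: Provider, oldId: string, newProps: unknown): Promise<string> {
  const newId = await provider.create(newProps); // 1. new resource, new instance id
  await rewireDependents(oldId, newId);          // 2. dependents cut over to it
  await provider.delete(oldId);                  // 3. old resource torn down last
  return newId;
}

declare function rewireDependents(oldId: string, newId: string): Promise<void>;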

Delete

provider.delete is called when a resource disappears from your code, when a replacement supersedes it, or when you run alchemy destroy. Like create, delete must be idempotent: deleting an already-gone resource is a success, not an error.
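
In practice that means treating “not found” as success. A sketch (the error class and API call below are hypothetical):

// Idempotent delete sketch: an already-gone resource counts as success.
class NotFoundError extends Error {}

async function deleteResource(queueUrl: string): Promise<void> {
  try {
    await deleteQueue(queueUrl); // hypothetical cloud API call
  } catch (err) {
    if (err instanceof NotFoundError) {
      return; // already deleted by a prior attempt; nothing to do
    }
    throw err; // anything else is a real failure
  }
}

declare function deleteQueue(queueUrl: string): Promise<void>;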

alchemy destroy is just a plan where every persisted resource is marked for deletion. Resources are removed in reverse dependency order — dependents go first.

Plan: 2 to delete

- Worker (Cloudflare.Worker)
- Bucket (Cloudflare.R2Bucket)

Proceed?
◉ Yes ○ No
 Worker (Cloudflare.Worker) deleted
 Bucket (Cloudflare.R2Bucket) deleted
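
“Reverse dependency order” is the creation order reversed. A minimal sketch, assuming a map from each resource to the resources it depends on:

// Sketch: compute deletion order so dependents are deleted first.
function deletionOrder(deps: Map<string, string[]>): string[] {
  const order: string[] = [];
  const seen = new Set<string>();
  const visit = (id: string) => {
    if (seen.has(id)) return;
    seen.add(id);
    for (const dep of deps.get(id) ?? []) visit(dep);
    order.push(id); // dependencies land before dependents...
  };
  for (const id of deps.keys()) visit(id);
  return order.reverse(); // ...so reversing puts dependents first
}

// Worker depends on Bucket, so Worker is deleted before Bucket:
deletionOrder(new Map([["Worker", ["Bucket"]], ["Bucket", []]]));
// => ["Worker", "Bucket"]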

Recovery

State persistence can fail after the cloud operation succeeds — the network drops between “bucket created” and “state saved”. Alchemy handles this by requiring every operation to be safe to retry:

  • Create: deterministic physical names mean a retry finds the existing resource instead of creating a duplicate.
  • Delete: a missing resource is treated as already deleted.
  • Read: providers can implement read so alchemy can recover state from the live cloud when persistence fails partway.
  • Retryable errors (eventual consistency, dependency races) are retried automatically with backoff (see the sketch after this list).
  • Non-retryable errors (validation, authorization) fail immediately and surface in the plan output.
  • Partial failures are safe to re-run thanks to idempotency.
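
A sketch of the retry loop (isRetryable is a hypothetical classifier; alchemy's actual heuristics may differ):

// Retry-with-backoff sketch; names here are illustrative.
async function withRetries<T>(op: () => Promise<T>, maxAttempts = 5): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await op();
    } catch (err) {
      if (!isRetryable(err) || attempt === maxAttempts) {
        throw err; // validation and authorization errors surface immediately
      }
      // Exponential backoff: 100ms, 200ms, 400ms, ...
      await new Promise((resolve) => setTimeout(resolve, 100 * 2 ** (attempt - 1)));
    }
  }
}

declare function isRetryable(err: unknown): boolean;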

Commands

The same engine powers all of these commands:

Command          What it does
alchemy plan     Run plan, print diff, exit
alchemy deploy   Plan, prompt for approval, apply
alchemy destroy  Plan with everything marked deleted, apply
alchemy dev      Plan + apply continuously on file changes

See the CLI reference for the full set of flags (--yes, --force, --dry-run, --stage, --profile, …).