
Engineering Log: Computing dependency cones for Terraform resources


When you want to change a single Terraform resource, you shouldn't need to plan and apply the entire configuration. You need that resource and its dependencies - nothing more. But extracting that subset manually means tracing through references, chasing down data sources, and hoping you didn't miss anything. That's error-prone and tedious. We built a tool that does it automatically.

$ cat engineering-log-terraform-dependency-cones.tldr
• Built a tool that computes the dependency cone of any Terraform resource
• Extracts a minimal subset file containing only the necessary dependencies
• Validates correctness by running terraform plan on the extracted subset
• Next: support for multi-file modules and broader testing

Dependency isolation for safer changes

Terraform configurations grow. What starts as a single file with a handful of resources becomes hundreds of resources across VPCs, subnets, security groups, IAM roles, Lambda functions, and all the glue that ties them together. When you need to change one resource in that sprawl - say, updating a Lambda function's environment variables - you don't actually need to interact with the entire configuration. You need that Lambda function, the VPC it's attached to, the security groups, the IAM role, and whatever else it depends on. Everything else is irrelevant to the change.

The problem is that Terraform doesn't give you a clean way to extract that subset. You can manually copy resources into a new file, chase down every reference, and hope you got all the dependencies. But that's brittle. Miss a data source or a subnet dependency and your plan either fails or, worse, tries to recreate resources that already exist.

So we built a tool that computes the dependency cone automatically. You give it a resource label, it walks the dependency graph, collects everything that resource transitively depends on, and outputs a minimal Terraform file containing just those resources. No manual tracing. No guessing. Just the subset you need.


How dependency extraction works

The tool is a subcommand that takes a resource label and a Terraform file as input. Under the hood, it parses the Terraform configuration, builds a dependency graph, and performs a graph traversal starting from the target resource.
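Building the graph comes down to scanning each resource block for references to other declared resources. Here's a rough Python sketch of how the edges could be derived - a regex stands in for a real HCL parser, and the function is ours purely for illustration, not Stategraph's actual internals:

import re

# Crude edge extraction over a single-file config. A regex stands in for a
# real HCL parser here; anything production-grade should walk the syntax
# tree instead of matching text.
RESOURCE_RE = re.compile(r'resource\s+"([\w-]+)"\s+"([\w-]+)"')

def build_graph(hcl_text: str) -> dict[str, set[str]]:
    labels = {f"{t}.{n}" for t, n in RESOURCE_RE.findall(hcl_text)}
    graph: dict[str, set[str]] = {label: set() for label in labels}
    # re.split with two capture groups yields
    # [preamble, type, name, body, type, name, body, ...]
    parts = RESOURCE_RE.split(hcl_text)
    for i in range(1, len(parts) - 2, 3):
        label = f"{parts[i]}.{parts[i + 1]}"
        body = parts[i + 2]
        # Any mention of another declared resource inside this block
        # becomes a dependency edge.
        for other in labels - {label}:
            if other in body:
                graph[label].add(other)
    return graph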

$ stategraph deps aws_lambda_function.my_lambda main.tf
→ Parsing Terraform configuration...
→ Building dependency graph...
→ Computing dependency cone for aws_lambda_function.my_lambda
→ Found 8 dependent resources
✓ Extracted subset written to output.tf
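From there, the cone itself is a standard breadth-first traversal. A minimal sketch, assuming the graph maps each resource label to the labels it directly references - again illustrative, not the tool's actual internals:

from collections import deque

def dependency_cone(graph: dict[str, set[str]], target: str) -> set[str]:
    """Return the target plus everything it transitively depends on."""
    if target not in graph:
        raise KeyError(f"unknown resource: {target}")
    cone = {target}
    queue = deque([target])
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, set()):
            if dep not in cone:  # visit each resource once, cycles included
                cone.add(dep)
                queue.append(dep)
    return cone

# Toy graph mirroring the demo: the Lambda pulls in its role and subnet,
# and the subnet pulls in the VPC.
graph = {
    "aws_lambda_function.my_lambda": {"aws_iam_role.lambda", "aws_subnet.private"},
    "aws_iam_role.lambda": set(),
    "aws_subnet.private": {"aws_vpc.main"},
    "aws_vpc.main": set(),
}
print(dependency_cone(graph, "aws_lambda_function.my_lambda"))

The extracted file is then just the blocks for those resources, printed back out.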

The output is a valid Terraform file containing only the resources in the dependency cone. The tool normalizes the formatting slightly - map keys get quoted, blocks are consistently formatted - but the semantics are identical to the original.

Validation through Terraform

The real test of correctness isn't whether the tool runs without crashing. It's whether Terraform can actually plan the extracted subset. If the dependency analysis is incomplete or incorrect, terraform plan will either fail with missing references or attempt to recreate existing resources. A successful plan whose actions match what the original configuration would do for the same resources is strong evidence the extraction preserved the semantics.
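That check is easy to script. Here's a sketch of one way to do it, assuming the original and the extracted subset sit in sibling directories - terraform's -out flag and show -json are real, but the layout and comparison are our own illustration, not the tool's built-in validation:

import json
import subprocess

def planned_addresses(workdir: str) -> set[str]:
    """Plan in workdir and return the addresses of resources that would change."""
    subprocess.run(["terraform", "plan", "-out=plan.tfplan"],
                   cwd=workdir, check=True)
    show = subprocess.run(["terraform", "show", "-json", "plan.tfplan"],
                          cwd=workdir, check=True,
                          capture_output=True, text=True)
    plan = json.loads(show.stdout)
    return {rc["address"]
            for rc in plan.get("resource_changes", [])
            if rc["change"]["actions"] != ["no-op"]}

# Every action the subset plans must also appear in the original's plan;
# anything extra means the extraction changed the semantics.
extra = planned_addresses("./subset") - planned_addresses("./original")
assert not extra, f"subset plans unexpected changes: {extra}"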

Testing with real configurations

The initial test uses a Terraform file with AWS resources deployed to LocalStack, a local AWS emulator. The configuration includes VPCs, subnets, internet gateways, route tables, and Lambda functions - enough complexity to validate that the dependency traversal handles nested references correctly.

$ terraform plan -out=plan.tfplan
Terraform will perform the following actions:

  # aws_lambda_function.my_lambda will be created
  + resource "aws_lambda_function" "my_lambda" {
      + function_name = "my_lambda"
      + ...
    }

Plan: 8 to add, 0 to change, 0 to destroy.

The plan shows exactly the resources we expected: the Lambda function and its dependencies. No extraneous resources, no missing references, no attempts to recreate resources that should already exist. That's the validation we need.

What comes next

The current implementation works for single-file configurations, but real Terraform projects span multiple files and reference external modules. The next step is extending the tool to handle module boundaries - tracking dependencies across files and resolving module references correctly.

We also need to test this on larger, more complex configurations. The test file has a few dozen resources, which is enough to validate the core algorithm, but production Terraform configurations can have hundreds or thousands of resources. Scaling the dependency traversal and ensuring the output remains correct at that scale is the next engineering challenge.

Building blocks for smarter workflows

This tool isn't just useful in isolation. It's a building block for more intelligent Terraform workflows. Once you can extract dependency cones, you can run targeted tests on subsets of your infrastructure, parallelize deployments by identifying independent resource groups, or analyze blast radius for proposed changes. Dependency analysis unlocks all of that.
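As a taste of the parallelization idea: once the graph exists, independent resource groups are just the connected components of its undirected form. A sketch, not a shipped feature:

def independent_groups(graph: dict[str, set[str]]) -> list[set[str]]:
    """Split resources into groups with no dependencies between them."""
    # Make edges bidirectional so a shared dependency links two cones.
    adj: dict[str, set[str]] = {}
    for node, deps in graph.items():
        adj.setdefault(node, set()).update(deps)
        for dep in deps:
            adj.setdefault(dep, set()).add(node)
    groups: list[set[str]] = []
    seen: set[str] = set()
    for start in adj:
        if start in seen:
            continue
        group, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in group:
                group.add(node)
                stack.extend(adj[node] - group)
        seen |= group
        groups.append(group)
    return groups

Each group can be planned and applied concurrently, since no resource in one group references a resource in another.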

Follow along as we build Stategraph

This is part of our ongoing engineering log series where we share progress, technical decisions, and the challenges we hit while building Stategraph. If you want to follow the journey or get involved as a design partner, subscribe for updates.