Monitoring CloudFormation Change Sets with Bash and AWS CLI

We manage our infrastructure as code with AWS CloudFormation; as of 2025, we have around 350 stacks. Since we trigger change set creation from our Bitbucket Pipelines, we sometimes forget to execute one. After a day with many deployments, we want an easy way to track pending change sets so nothing slips through.

Why Bash?

We value simplicity and transparency in our tooling. That’s why we opted to script our own lightweight Bash solution using the AWS CLI and jq, instead of introducing yet another dashboard or monitoring dependency. Bash isn’t the easiest language to learn, but it is dependency-free and extremely powerful, especially in combination with CLIs. Also, working in a terminal looks cool af 🙂

🛠️ What We Monitor

  • Pending Change Sets: Change sets that have been created but not yet executed
  • Executed Change Sets: Recently applied infrastructure changes
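
Both pieces of information come from a single AWS CLI call per stack. A quick one-off check for one stack (the stack name below is just a placeholder) looks roughly like this:

# One-off check for a single stack; "some-stack" is a placeholder
aws cloudformation list-change-sets \
  --stack-name some-stack \
  --query "Summaries[].[ChangeSetName,ExecutionStatus,Status,CreationTime]" \
  --output table

An ExecutionStatus of AVAILABLE means the change set is still pending; EXECUTE_COMPLETE means it has already been applied.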

🧾 The Script

Here’s the script; the only dependencies you need are the AWS CLI and jq:

#!/bin/bash

STACKS=$(aws cloudformation list-stacks \
  --query "StackSummaries[?StackStatus=='CREATE_COMPLETE'||StackStatus=='UPDATE_COMPLETE'].StackName" \
  --output text)

for STACK in $STACKS; do
  echo "🔍 Stack: $STACK"

  CHANGESETS=$(aws cloudformation list-change-sets --stack-name "$STACK" --output json)

  echo "$CHANGESETS" | jq -r '.Summaries[] | select(.ExecutionStatus == "AVAILABLE") | "🟡 Pending: \(.ChangeSetName) (\(.Status)) - \(.CreationTime)"'
  echo "$CHANGESETS" | jq -r '.Summaries[] | select(.ExecutionStatus == "EXECUTE_COMPLETE") | "✅ Executed: \(.ChangeSetName) - \(.CreationTime)"'
  echo ""
done

🧪 Sample Output

🔍 Stack: some-stack
🟡 Pending: feature-xy (CREATE_COMPLETE) - 2025-04-22T08:15:34Z
✅ Executed: hotfix-rollback-prod - 2025-04-20T17:45:00Z

🔍 Stack: some-other-stack
✅ Executed: new-endpoint-deployment - 2025-04-21T12:30:12Z

You can use it, for example, to add Slack alerts when pending change sets linger too long, or to quickly scan for unexecuted infra changes before/after big deploys.
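
As a rough sketch of the Slack idea (the webhook URL and the check-changesets.sh file name are just placeholders for wherever you saved the script above), you could pipe the pending lines into an incoming webhook:

#!/bin/bash
# Sketch only: alert on lingering pending change sets via a Slack incoming webhook.
# The webhook URL and the script file name (check-changesets.sh) are placeholders.
SLACK_WEBHOOK_URL="https://hooks.slack.com/services/XXX/YYY/ZZZ"

PENDING=$(./check-changesets.sh | grep "Pending:" || true)

if [[ -n "$PENDING" ]]; then
  printf 'Pending CloudFormation change sets:\n%s\n' "$PENDING" \
    | jq -Rs '{text: .}' \
    | curl -s -X POST -H 'Content-Type: application/json' -d @- "$SLACK_WEBHOOK_URL"
fi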

🛠️ Minimal Tools. Maximum Control.

We believe in minimalism in DevOps. Not everything needs a dashboard—especially if Bash gets the job done elegantly. This script helps us stay in control of our CloudFormation lifecycle while keeping things simple for all developers, regardless of stack.

🏅 Bonus: The script we actually use

This is the script we actually use. It is more powerful, but also more complex. It adds a stack name filter and lets you specify the number of hours within which the most recent executions should be reported.

#!/bin/bash

# --- Configuration
HOURS_AGO=${HOURS_AGO:-24}  # Default to 24 hours if not set externally
FILTER="$1"                 # Optional stack name substring filter

# --- Calculate Time Window
if date -v -"${HOURS_AGO}"H &>/dev/null; then
  # macOS
  SINCE_EPOCH=$(date -v -"${HOURS_AGO}"H +%s)
  SINCE_HUMAN=$(date -v -"${HOURS_AGO}"H -u +"%Y-%m-%d %H:%M:%S UTC")
else
  # Linux
  SINCE_EPOCH=$(date -d "$HOURS_AGO hours ago" +%s)
  SINCE_HUMAN=$(date -d "$HOURS_AGO hours ago" -u +"%Y-%m-%d %H:%M:%S UTC")
fi

echo "📅 Showing CloudFormation activity in the last $HOURS_AGO hours (since $SINCE_HUMAN)"
[[ -n "$FILTER" ]] && echo "🔍 Filtering stacks by substring: '$FILTER'"
echo ""

# --- Get All Relevant Stacks
STACKS=$(aws cloudformation list-stacks \
  --query "StackSummaries[?StackStatus=='CREATE_COMPLETE'||StackStatus=='UPDATE_COMPLETE'].StackName" \
  --output text)

# --- Loop Through Stacks
for STACK in $STACKS; do
  if [[ -n "$FILTER" && "$STACK" != *"$FILTER"* ]]; then
    continue
  fi

  echo "🔍 Stack: $STACK"

  # --- Pending Change Sets
  CHANGESETS=$(aws cloudformation list-change-sets --stack-name "$STACK" --output json)
  echo "$CHANGESETS" | jq -r '.Summaries[] | select(.ExecutionStatus == "AVAILABLE") | "🟡 Pending: \(.ChangeSetName) (\(.Status)) - \(.CreationTime)"'

  # --- Recent Executions
  EXECUTIONS=$(aws cloudformation describe-stack-events --stack-name "$STACK" --output json |
    jq -r --argjson since "$SINCE_EPOCH" '
      .StackEvents[]
      | .Timestamp as $ts
      | ($ts | sub("\\.[0-9]+Z$"; "Z") | fromdateiso8601) as $event_time
      | select((.ResourceStatus == "UPDATE_COMPLETE" or .ResourceStatus == "CREATE_COMPLETE") and $event_time > $since)
      | "🕒 \($ts): \(.LogicalResourceId) - \(.ResourceStatus)"')

  if [[ -n "$EXECUTIONS" ]]; then
    echo "✅ Executions in last $HOURS_AGO hours:"
    echo "$EXECUTIONS"
  fi

  echo ""
done

📌 Sample call & output

[10:22:58] giehlman:$ HOURS_AGO=36 ./get-pending-cs.sh billing
📅 Showing CloudFormation activity in the last 36 hours (since 2025-04-21 20:24:01 UTC)
🔍 Filtering stacks by substring: 'billing'

🔍 Stack: billing-api-XXX-1
🟡 Pending: cs-52f9278 (CREATE_COMPLETE) - 2025-03-06T13:04:25.805Z
✅ Executions in last 36 hours:
🕒 2025-04-22T15:13:03.175Z: billing-api-XXX-1 - UPDATE_COMPLETE
🕒 2025-04-22T15:12:59.728Z: ServiceDefinition - UPDATE_COMPLETE
🕒 2025-04-22T15:09:35.141Z: TaskDefinition - UPDATE_COMPLETE

🔍 Stack: billing-api-XXX-2
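
Before or after a big deploy, the same script can also serve as a simple gate. Here is a minimal sketch, assuming the bonus script is saved next to it as get-pending-cs.sh, that exits non-zero when anything is still pending:

#!/bin/bash
# Sketch only: fail a pipeline step if any stack still has a pending change set.
# Assumes the bonus script above is saved next to this file as get-pending-cs.sh;
# an optional stack name filter can be passed through as the first argument.
OUTPUT=$(HOURS_AGO=24 ./get-pending-cs.sh "$1")
echo "$OUTPUT"

if echo "$OUTPUT" | grep -q "Pending:"; then
  echo "❌ Unexecuted change sets found - please execute or delete them."
  exit 1
fi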

I Advocated for Containers in 2019 — They Rolled It Out in 2022

Back in 2018, I took on a freelance role at a large organization. Together with one other engineer, I was responsible for keeping dozens of short-lived websites up and running: all the underlying infrastructure, coordination with development agencies, and the bridge between internal IT and external infrastructure providers.

The websites themselves were built by agencies. Our job was everything around it: deployments, environments, scaling, stability, and being ready when something broke.

The Setup

At the time, the system ran on a traditional LAMP stack: Apache, PHP, and large autoscaling nodes. Multiple website vhosts lived side by side on the same physical servers. If one crashed, it risked taking down others. It worked, but it wasn’t isolated or flexible.

I thought there was room for improvement.

I started suggesting: "What if each site ran in its own container? Small, isolated, easily restarted, easily scaled. And what if we transitioned to a Kubernetes-based setup?"

The Reaction

The idea wasn’t rejected — but it was too early.

We were at the bottom of the chain. Most architectural decisions came from centralized IT, and change moved slowly. Some of the stakeholders preferred the familiar stack. Others simply didn’t see the need. And frankly, the incentives to modernize weren’t strong at the time.

So we stayed with Apache. We stayed with full-node scaling. And we kept handling incidents the old way: reviewing logs, identifying which site caused issues, and reacting.

Still, I documented the containerization proposal. I outlined why it mattered — not just for scalability, but for reliability, security, and maintainability.

Then I let it go.

What Changed

By 2022, years after I had left the role full-time, the company began rolling out containers. Kubernetes. CI/CD pipelines. Site isolation.

I wasn’t involved in the rollout. But I smiled when I saw it happen.

Not because it proved me right.
But because it reminded me:
Sometimes, technical clarity doesn’t need immediate confirmation. It just needs time.

What I Took From It

This wasn’t a glamorous project. There were no shiny products, no launch announcements. I wasn’t there to force change. I was there to keep things running, improve what I could, and plant seeds when I had the chance. Some of those seeds took years to grow. But they did.

Final Thought

You don’t always need to win the argument to know you’re on the right path.

Sometimes, your best work is the kind that takes years to be seen — and still holds up when it finally is.