What I’ve Learned Over the Years of Scaling a System

I’ve spent the last few years building a system that processes and delivers 5+ million physical letters per year – fully automated, GDPR-compliant, and based entirely on AWS.
It started as a side project. It became a multi-million-revenue company. And scaling it taught me things I would never have learned from a book.


1. Scaling isn’t a feature – it’s a mindset shift

When you start out, almost anything works. Monoliths. Direct calls. One big database.
But as your system grows, success starts breaking your architecture.
At 100,000 documents per year, we could still get away with vertical scaling. At 1 million, it became obvious: the same patterns no longer apply.

Every stage of growth has its own bottlenecks – and you have to spot them before they burn you.

I began thinking in stages, isolating steps, and shifting from “catching errors” to designing for failure.
Like a factory: bad input gets pulled aside – flow continues.
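As a tiny illustration of that factory mindset, here is a minimal Python sketch; process_document and the in-memory reject list are hypothetical stand-ins for whatever real processing step and dead-letter mechanism you use:

    # Minimal sketch: "bad input gets pulled aside – flow continues".
    # process_document() and documents are hypothetical placeholders.
    def process_batch(documents, process_document):
        processed, rejected = [], []
        for doc in documents:
            try:
                processed.append(process_document(doc))
            except Exception as exc:
                # Pull the bad item aside instead of stopping the whole line.
                rejected.append({"doc": doc, "error": str(exc)})
        return processed, rejected

In production this would typically route rejects to a queue or dead-letter store instead of a list, but the principle stays exactly this small.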


2. Every failure must be isolated

Imagine your system as a pipeline with 100+ steps.
What happens if step 37 crashes? How do you stay sane thinking about all these steps at once? (Spoiler: you can’t.)

In the early days, that meant the entire process stopped.
Now? Step 37 fails for that item only. The rest continues. It’s just like a manufacturing line: isolate bad parts, keep the flow moving, and the flow keeps getting faster.

That one principle changed how I think about services:

  • Always know the high level, be precise in the low level, and be quick at shifting your brain between the two, especially in an emergency (imagine someone throwing a tiny screw into your gearbox while you’re trying to win a race; you have to navigate the system quickly, find the problem, resolve it, and keep going).
  • Every service will fail. How bad can it get, and how fast can you recover?
  • You will not be able to avoid errors, so how can you be resilient? Always know your worst case.
  • You will not be able to have a strategy for every possible error. Instead, you have to become quick at solving ANY problem.
  • Queue everything. Decouple. Retry selectively. Have a kill switch. Be able to isolate parts of the system. Be able to hit a full stop and restart. Be able to change the tyres while staying on the race track. (A minimal sketch of this queue-and-isolate pattern follows this list.)
  • Log and route failures without halting the flow, AND do it in a way that really helps debugging (every message you log should have a clear purpose).
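To make the queue-and-isolate pattern concrete, here is a minimal sketch of such a worker on SQS. The queue URLs, the handle() step, and the environment-variable kill switch are assumptions for illustration, not our exact setup:

    # Per-item isolation on SQS: one bad message is routed aside, the rest flows on.
    import json
    import os

    import boto3

    sqs = boto3.client("sqs")
    WORK_QUEUE = os.environ["WORK_QUEUE_URL"]        # hypothetical queue URL
    FAILURE_QUEUE = os.environ["FAILURE_QUEUE_URL"]  # hypothetical failure-queue URL


    def handle(item: dict) -> None:
        """Hypothetical processing step (think: step 37)."""
        ...


    def worker_loop() -> None:
        # Kill switch: flip one flag to hit a full stop, flip it back to restart.
        while os.environ.get("KILL_SWITCH", "off") != "on":
            resp = sqs.receive_message(
                QueueUrl=WORK_QUEUE, MaxNumberOfMessages=10, WaitTimeSeconds=20
            )
            for msg in resp.get("Messages", []):
                try:
                    handle(json.loads(msg["Body"]))
                except Exception as exc:
                    # Isolate the failure: route it aside with context, keep going.
                    sqs.send_message(
                        QueueUrl=FAILURE_QUEUE,
                        MessageBody=json.dumps(
                            {"original": msg["Body"], "error": str(exc)}
                        ),
                    )
                finally:
                    # Either way the item leaves the main line; the flow continues.
                    sqs.delete_message(
                        QueueUrl=WORK_QUEUE, ReceiptHandle=msg["ReceiptHandle"]
                    )

One design note: deleting the message even on failure is deliberate here, because the failure was already routed to its own queue; retrying then becomes an explicit decision ("retry selectively") instead of an accident.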

Technical example: stack evolution

We had a PDF processor that was too slow under load. Initially it did multiple steps, all in Java. Why Java? Maybe opinionated, but I like the PDFBox implementation.
Instead of trying to push every requirement onto Java, we broke the process into:

  • Text extraction, best with Python + OCR
  • Layout analysis, best with Java + PDFBox
  • Template matching, best with NodeJS (no type system needed)
  • Rendering, again best with Java

Each step was optimized on multiple levels: language, runtime model (FaaS, PaaS, etc.), hardware requirements, load, latency, and scaling.
This reduced cold starts, gave observability per step, and let us scale horizontally with optimal cost-efficiency.
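To give a feel for what "multiple levels" means, here is a small sketch of how you can describe each stage as its own deployable unit. The runtimes, memory sizes, and queue names below are illustrative assumptions, not our exact configuration:

    # Each pipeline stage is its own unit: its own runtime, its own compute model,
    # its own memory size, and its own input queue as the decoupling point.
    from dataclasses import dataclass


    @dataclass
    class StageConfig:
        name: str
        runtime: str       # language/platform the step is best served by
        compute: str       # FaaS, container, etc.
        memory_mb: int     # sized per step, not per pipeline
        input_queue: str   # each stage pulls from its own queue


    PIPELINE = [
        StageConfig("text-extraction",   "python + OCR",  "lambda",    2048, "q-extract"),
        StageConfig("layout-analysis",   "java + PDFBox", "lambda",    3008, "q-layout"),
        StageConfig("template-matching", "nodejs",        "lambda",    512,  "q-match"),
        StageConfig("rendering",         "java",          "container", 4096, "q-render"),
    ]

Because every stage is described (and deployed) independently, each one can be scaled, priced, and observed on its own.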


3. You can’t scale what you can’t observe

At 1M/year, we had to get serious about visibility.
Logs weren’t enough. Metrics weren’t enough.
We had to watch failure patterns and understand why something degraded under load.

Cold starts. Latency spikes. Retry storms.
These weren’t bugs – they were blind spots.

We added detailed CloudWatch metrics, end-to-end request IDs, and dashboards per pipeline stage.
Suddenly, we could see where load piled up, which queues were always full, and where we had throughput issues.
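A minimal sketch of that per-stage instrumentation: one structured log line carrying the end-to-end request ID, plus one custom CloudWatch metric per pipeline stage. The namespace and the metric/dimension names are assumptions for illustration:

    # Structured log + per-stage CloudWatch metric, keyed by an end-to-end request id.
    import json
    import time

    import boto3

    cloudwatch = boto3.client("cloudwatch")


    def record_stage(stage: str, request_id: str, started_at: float, ok: bool) -> None:
        duration_ms = (time.time() - started_at) * 1000.0

        # Every log message has a purpose and carries the request id for correlation.
        print(json.dumps({
            "request_id": request_id,
            "stage": stage,
            "duration_ms": round(duration_ms, 1),
            "ok": ok,
        }))

        # One metric per stage makes the per-stage dashboards trivial to build.
        cloudwatch.put_metric_data(
            Namespace="letters-pipeline",   # assumed namespace
            MetricData=[{
                "MetricName": "StageDuration",
                "Dimensions": [{"Name": "Stage", "Value": stage}],
                "Value": duration_ms,
                "Unit": "Milliseconds",
            }],
        )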

When we started scaling, the biggest surprise wasn’t load.
It was the long tail:

  • A rare bug in a PDF parser.
  • Hot partitioning in S3 in the early stages.
  • A misconfigured retry policy that blew up costs (a sketch of the guardrail we were missing follows this list).
  • A Lambda cold start that doubled response times.
  • Even an error within S3 that Amazon guaranteed would NOT happen (a false-positive confirmation on what should have been atomic locking of S3 objects; with that, our retry mechanism became invalid and it cost me $30k).
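For the retry-policy item above, this is roughly the guardrail that would have contained the damage: a bounded redrive policy, so a poison message can only be retried a few times before it is parked in a dead-letter queue. The queue URL, DLQ ARN, and numbers are placeholders, not our actual configuration:

    # Bound retries at the queue level so a single bad message cannot turn into
    # a retry storm. Values below are placeholders.
    import json

    import boto3

    sqs = boto3.client("sqs")

    WORK_QUEUE_URL = "https://sqs.eu-central-1.amazonaws.com/123456789012/work"  # placeholder
    DLQ_ARN = "arn:aws:sqs:eu-central-1:123456789012:work-dlq"                   # placeholder

    sqs.set_queue_attributes(
        QueueUrl=WORK_QUEUE_URL,
        Attributes={
            "RedrivePolicy": json.dumps({
                "deadLetterTargetArn": DLQ_ARN,
                "maxReceiveCount": "3",   # retry at most a few times, then park it
            }),
            "VisibilityTimeout": "120",   # give each attempt enough time to finish
        },
    )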

Plus: Monitoring isn’t about uptime.
It’s about understanding where time, cost, and trust are lost.


4. Eventually consistent isn’t a bug – it’s a strategy

We stopped aiming for “perfect data, instantly”.
Instead, we asked:

  • What really needs to be transactional?
  • What can be delayed, batched, or corrected later?

Answer:

  • Billing: must be consistent.
  • UI status: eventually consistent.
  • Rendering metadata: cache and recheck later.

Every strong architecture is opinionated about consistency.
We learned to spend it where it matters – and skip it where it doesn’t.
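As a sketch of what "spending" consistency looks like in code: the billing record goes through a strict, synchronous write, while the UI status is just an event that is allowed to arrive late. The table name, topic ARN, and item shape are assumptions for illustration:

    # Strict where it matters (billing), relaxed where it doesn't (UI status).
    import json

    import boto3

    dynamodb = boto3.client("dynamodb")
    sns = boto3.client("sns")

    BILLING_TABLE = "billing-items"                                   # placeholder
    STATUS_TOPIC = "arn:aws:sns:eu-central-1:123456789012:ui-status"  # placeholder


    def record_letter(letter_id: str, price_cents: int) -> None:
        # Strict path: the billing row must be written before we report success.
        dynamodb.put_item(
            TableName=BILLING_TABLE,
            Item={
                "letter_id": {"S": letter_id},
                "price_cents": {"N": str(price_cents)},
            },
        )

        # Relaxed path: the UI status is just an event; it is fine if it lags behind.
        sns.publish(
            TopicArn=STATUS_TOPIC,
            Message=json.dumps({"letter_id": letter_id, "status": "billed"}),
        )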


5. Serverless doesn’t always scale cost-efficiently

When you’re small, FaaS is magic.
When you’re big, it can become an invoice.

We had Lambdas running 24/7 due to high throughput.
They were functionally correct, working since day 1 – but cost-wise? They became awful over time.
So we shifted to PaaS or containers where continuous load justified it.
Lesson: be willing to evolve into other paradigms and always challenge earlier decisions. It’s natural as you grow.
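A back-of-the-envelope comparison shows why this happens once a step is busy around the clock. The prices below are illustrative placeholders, not current AWS list prices; the point is the shape of the math, not the exact figures:

    # Rough monthly cost of one 1 GB worker that is effectively busy 24/7.
    HOURS_PER_MONTH = 730

    # Per-request pricing (Lambda-style): you pay for every GB-second, all month long.
    lambda_gb_second_price = 0.0000166667   # $/GB-s, example figure
    lambda_monthly = HOURS_PER_MONTH * 3600 * 1.0 * lambda_gb_second_price

    # Always-on pricing (container-style): you pay for provisioned vCPU and memory.
    fargate_vcpu_hour = 0.04048             # $/vCPU-h, example figure
    fargate_gb_hour = 0.004445              # $/GB-h, example figure
    fargate_monthly = HOURS_PER_MONTH * (0.5 * fargate_vcpu_hour + 1.0 * fargate_gb_hour)

    print(f"FaaS worker:      ~${lambda_monthly:.0f}/month")   # roughly $44 with these numbers
    print(f"Container worker: ~${fargate_monthly:.0f}/month")  # roughly $18 with these numbers

With spiky or low traffic the comparison flips, which is exactly why the decision has to be revisited as load patterns change.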


6. Scaling changes how you think about systems

At some point, I stopped writing “features” and started building flows.
I looked at my system like a logistics chain:

  • Where are the drop points?
  • What happens when an event gets stuck?
  • Can I retry it without side effects? (More on that below.)
  • What’s the customer impact, and can we recover if the shit hits the fan?

It’s not about functions anymore. It’s about behavior over time. “Keep an eye on the overall picture.”
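And for "can I retry it without side effects?": a minimal idempotency sketch. A conditional write acts as an "already processed?" check, so a redelivered or retried event becomes a no-op. The table name and the do_side_effect() step are hypothetical:

    # Idempotent event handling: claim the event id first, run the side effect once.
    import boto3
    from botocore.exceptions import ClientError

    dynamodb = boto3.client("dynamodb")
    IDEMPOTENCY_TABLE = "processed-events"   # placeholder


    def handle_event(event_id: str, do_side_effect) -> bool:
        try:
            # Claim the event id; this fails if an earlier attempt already claimed it.
            dynamodb.put_item(
                TableName=IDEMPOTENCY_TABLE,
                Item={"event_id": {"S": event_id}},
                ConditionExpression="attribute_not_exists(event_id)",
            )
        except ClientError as err:
            if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
                return False   # duplicate delivery or retry: skip the side effect
            raise

        do_side_effect()
        return True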


Closing Thoughts

I didn’t learn this from blog posts.
I learned it by building, breaking, fixing, scaling, and repeating – over years.

If you’re in the middle of growing something, here’s my advice:

  • Don’t chase scale for its own sake.
  • Don’t fear refactoring.
  • Systems evolve. Be clear about what will become your next bottleneck and be prepared.
  • Put your system under pressure. Just like an engine, it will show you where it breaks under load. Optimize that.

I’m still learning. Still breaking things. But that’s part of it. It has worked well so far.



#systemdesign #scalability #softwareengineering #devops #aws #serverless #architecture #cloudcomputing #lessonslearned #buildinpublic
