Running a 20-Minute System Design Monologue – with ChatGPT

Today I ran a focused system design session with ChatGPT, simulating a real-world interview scenario. The objective: articulate a complete system design in 20 uninterrupted minutes, unfortunately without any back-and-forth, questions, or validation during the session.

I wanted to mirror a common challenge in technical interviews, where candidates must prioritize clarity, coverage, and structure without knowing whether they’re hitting the right depth or focus points until it’s too late.


How I Approached the Session

Rather than starting with requirements gathering (because we were discussing a system that does not yet exist, so ChatGPT wouldn’t have known the answers either) or clarifying edge cases, I jumped straight into the design—intentionally skipping product-level back-and-forth.

I structured my thoughts to cover:

  • High-level architecture
  • User interactions (anonymous, authenticated)
  • Stateless frontend-to-LLM communication over WebSockets (sketched below)
  • Session handling and asynchronous processing
  • API gateway and scalable model-serving infrastructure
  • Queue-based and stream-based processing between components
  • Global distribution and regional data sharding
  • Separation of production and analytics databases
  • Storage strategies for user transcripts and metadata
  • Observations around caching, prioritization (free vs. paid tiers), and latency considerations

The design emphasized practical scalability over academic purity, and deliberately scoped out non-critical domains such as billing to keep the session focused.
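To make that frontend-to-model leg concrete, here is a minimal sketch of the WebSocket flow. FastAPI is an assumed choice, and `generate_tokens` is a hypothetical stand-in for the model-serving tier; neither was named in the session.

```python
# Minimal sketch of the stateless frontend-to-LLM WebSocket leg.
# FastAPI is an assumed choice; generate_tokens is a hypothetical
# stand-in for the model-serving tier behind the API gateway.
import uuid

from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()


async def generate_tokens(prompt: str):
    """Hypothetical placeholder for a call into the model-serving tier."""
    for token in ("Streaming", " tokens", " back..."):
        yield token


@app.websocket("/chat")
async def chat(ws: WebSocket):
    await ws.accept()
    # Session state is keyed server-side, so any frontend replica can hold the socket.
    session_id = str(uuid.uuid4())
    try:
        while True:
            prompt = await ws.receive_text()
            # Stream tokens as they arrive instead of waiting for the
            # full completion, which keeps perceived latency low.
            async for token in generate_tokens(prompt):
                await ws.send_text(token)
            await ws.send_text(f"[DONE] session={session_id}")  # illustrative end marker
    except WebSocketDisconnect:
        pass  # client disconnected; the session can be cleaned up asynchronously
```

The shape is the point, not the specifics: the frontend stays stateless, while session state and transcripts live behind the gateway.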


Post-Session Evaluation

After finishing the 20-minute monologue, I asked ChatGPT for structured feedback across typical dimensions used in Staff-level system design interviews. The evaluation broke down as follows:

✅ What went well:

  • System clarity: Clean decomposition into user-facing, backend, and model-serving layers.
  • Data flow: Good async strategy between UI and model endpoints, including proper use of queues and a potential migration to streams (sketched after this list).
  • Scalability: Realistic approach to request distribution, memory-heavy model workloads, and global sharding.
  • Data separation: Thoughtful distinction between live production data and analytics pipelines—clean decoupling.
  • Tradeoffs: Clear articulation of when and why caching is ineffective, and how to prioritize model latency over ACID guarantees.
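Since the queue-versus-stream point is easy to state but vague in the abstract, here is a hedged sketch of both handoffs. Redis is an assumed broker (the session never committed to one), and all key, group, and consumer names are illustrative.

```python
# Sketch of the queue-to-stream handoff between the UI tier and model
# workers. Redis is an assumed broker; names are illustrative only.
import json

import redis

r = redis.Redis()

# --- Queue-based handoff: each job is popped by exactly one worker ---
def enqueue_request(session_id: str, prompt: str) -> None:
    r.lpush("inference:jobs", json.dumps({"session": session_id, "prompt": prompt}))

def queue_worker() -> None:
    while True:
        _, raw = r.brpop("inference:jobs")  # blocks until a job arrives
        job = json.loads(raw)
        # ... invoke the model-serving tier with job["prompt"] ...

# --- Stream-based handoff: replayable log, consumer groups, explicit acks ---
def stream_worker(consumer: str) -> None:
    try:
        r.xgroup_create("inference:stream", "workers", id="0", mkstream=True)
    except redis.ResponseError:
        pass  # consumer group already exists
    while True:
        entries = r.xreadgroup("workers", consumer,
                               {"inference:stream": ">"}, count=10, block=5000)
        for _, messages in entries or []:
            for msg_id, fields in messages:
                # ... process fields, then acknowledge the entry ...
                r.xack("inference:stream", "workers", msg_id)
```

The list-based queue is simpler, but a job popped by a crashing worker is simply lost; the stream variant adds replay and explicit acknowledgement at the cost of more bookkeeping, which is roughly the tradeoff behind the migration mentioned above.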

⚠️ What could be expanded:

  • Observability: No mention of logging, metrics, tracing, or monitoring workflows.
  • Security & compliance: No direct reference to GDPR, PII handling, or access control beyond basic authentication.
  • Model serving internals: While load-aware model invocation was covered, serving-stack choices (TGI, Triton, etc.) were left unspecified → there’s still a lot for me to learn here, since I’m new to the field of AI.
  • Failure handling: Retry strategies were mentioned in passing, but SLAs, circuit breakers, and degradation behavior weren’t explored in detail (a sketch follows below).

Importantly, many of these areas could have been addressed with follow-up prompts. I simply chose to prioritize architectural flow and practical decisions due to the time constraint.
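For illustration, this is the kind of circuit breaker that feedback points at: a minimal sketch with made-up thresholds and timings, not something from the session itself.

```python
# Minimal circuit-breaker sketch for calls into a flaky model endpoint.
# Thresholds and timings are illustrative, not from the session.
import time


class CircuitBreaker:
    def __init__(self, max_failures: int = 5, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at: float | None = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")  # degrade instead of piling on
            self.opened_at = None  # half-open: allow one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

Paired with bounded retries and a degraded fallback such as a cached response, this is the sort of failure-handling depth a follow-up question would have surfaced.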


Reflections on Interview Dynamics

One major takeaway: it’s not a weakness if the interviewer needs to ask questions. In fact, interviewers are trained to dig deeper, and doing so allows them to assess depth, not just breadth.

In a real interview, the interviewer would likely have asked:

“Would you mind walking me through your observability stack?”
“How do you ensure privacy compliance for international users?”
“What’s your failover strategy if an LLM node crashes mid-session?”

And I would have been ready to answer. But given no feedback window, the biggest challenge was prioritizing content without knowing what the interviewer values most.


Conclusion

Practicing system design this way—with ChatGPT as a neutral, non-interrupting sparring partner—offers a unique way to pressure-test both structure and pacing. It forces discipline around how much to cover in limited time, while still allowing a safe space to reflect and iterate afterward.

This session reinforced my architectural intuition, but also reminded me how valuable interviewer interaction really is—not just for clarifying ideas, but for surfacing expertise I might otherwise leave unsaid.

Level assessment after this session: Tracking toward Staff Engineer (L6) — with clear strengths in architecture and tradeoff reasoning, and opportunities to deepen on compliance, observability, and operational resilience.

ChatGPT continues to be a surprisingly effective mock interviewer for Staff-level system design—one that can reflect, critique, and scale with your thinking.

When the System Works but the Process Fails

Some of the hardest lessons in engineering have nothing to do with code.

After finishing a technically challenging Android project (a mobile remote support SDK that never saw the light of day due to business reasons), I moved on to a new project with a distributed team. The setup was simple on paper: our team handled the input side — collecting structured user input and sending it to a backend — while another team, working remotely, handled the response side via webhooks.

We were building two halves of the same system. One side sends. The other responds.
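In concrete terms, the split looked roughly like this; the endpoints, payloads, and function names below are hypothetical stand-ins, not the real project’s API.

```python
# Hypothetical sketch of the two halves: our side posts structured input,
# the other side's webhook was supposed to produce the response.
# Endpoints and payloads are illustrative stand-ins.
import requests
from fastapi import FastAPI

BACKEND_URL = "https://backend.example.com/input"  # hypothetical

# --- Our half: collect structured user input and send it on ---
def send_input(user_id: str, fields: dict) -> None:
    resp = requests.post(BACKEND_URL,
                         json={"user": user_id, "fields": fields},
                         timeout=10)
    resp.raise_for_status()  # this side delivered clean, acknowledged data

# --- The other half: the webhook that should have answered ---
app = FastAPI()

@app.post("/webhook")
def handle_webhook(payload: dict) -> dict:
    # In the final presentation, nothing ever came back from this side.
    return {"response": f"processed input for {payload.get('user')}"}
```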

Trouble Without Error Messages

From early on, I noticed irregularities in the other team’s work — not in their intent, but in their execution. Basic Git practices were missing. Huge commits with full file diffs. Overwrites. Merge conflicts. No reviewable history. Every day, I spent hours cleaning up commit messes, just to keep the project functional.

I flagged it — politely, repeatedly. But nothing changed.

Eventually, we reached the final presentation.

What Happened

Our part worked. It sent clean, structured data to the backend — exactly as expected. The webhook system, which was supposed to generate the response, didn’t reply.

The system failed. Not because of bad architecture. Not because of bugs. But because one half never responded.

Since there was no visible output, some assumed our part had failed too.

That moment was hard. It’s tough to explain why your component works when the full system doesn’t. Especially when you’re the only one saying so.

What Made It Worse

This wasn’t about blaming the other team. They were under pressure, too. And remote collaboration across time zones and cultures isn’t easy.

But what hurt more was something else: not being heard.

Even when I raised issues clearly. Even when I showed commit logs. Even when I explained in detail what was working, what wasn’t, and why. People nodded. But nothing changed.

Some of the people leading the project were technically accomplished — holding PhDs. But there was a disconnect between knowledge and action. And eventually, I realized something quietly unsettling: my explanations weren’t landing — not because they were wrong, but because no one really wanted to deal with them.

The Quiet Realization

There was one colleague I respected a lot — a high performer. He noticed. We shared many quiet moments, seeing the same things and not always being able to fix them.

He didn’t say much. But he understood. That was enough.

What I Learned

I joined the company to learn. And I did — but not in the way I expected.

I learned what happens when ownership is fragmented, but expectations are not.
I learned what it feels like to carry responsibility without authority.
And I learned to trust my technical judgment, even when others don’t.

It was one of the first times I realized: maybe I wasn’t falling behind. Maybe I was already further ahead than I thought — just in the wrong place to see it.

Closing Thought

Not every lesson in tech is about systems, APIs, or architecture. Sometimes, it’s about listening. Or not being listened to. Or immature management. About speaking up, even when nothing changes.

That project didn’t ship. But the awareness it gave me stuck.

And it quietly shaped the engineer and leader I became after.