Today I ran a focused system design session with ChatGPT, simulating a real-world interview scenario. The objective: to articulate a complete system design in 20 uninterrupted minutes, with (unfortunately) no back-and-forth, questions, or validation during the session.
I wanted to mirror a common challenge in technical interviews, where candidates must prioritize clarity, coverage, and structure without knowing whether they’re hitting the right depth or focus points until it’s too late.
How I Approached the Session
Rather than starting with requirements gathering (since we were discussing a system that does not yet exist, ChatGPT wouldn't have known the answers either) or clarifying edge cases, I jumped straight into the design, intentionally skipping product-level back-and-forth.
I structured my thoughts to cover:
- High-level architecture
- User interactions (anonymous, authenticated)
- Stateless frontend-to-LLM communication over WebSockets
- Session handling and asynchronous processing
- API gateway and scalable model-serving infrastructure
- Queue-based and stream-based processing between components
- Global distribution and regional data sharding
- Separation of production and analytics databases
- Storage strategies for user transcripts and metadata
- Observations around caching, prioritization (free vs. paid tiers), and latency considerations
The design emphasized practical scalability over academic purity, and deliberately scoped out non-critical domains such as billing to keep the focus tight. A minimal sketch of the WebSocket-plus-queue flow I described follows below.
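To make that flow concrete: a stateless WebSocket endpoint hands each prompt to a queue and lets a worker invoke the model asynchronously, so the connection handler never blocks on inference. This is a sketch under assumptions, not what I presented verbatim: I'm assuming a FastAPI frontend, and the in-process `asyncio.Queue` and `fake_model_call` stub stand in for a real broker (Kafka, SQS) and the actual model-serving endpoint.

```python
# Sketch only: an in-process asyncio.Queue stands in for a real message
# broker; fake_model_call stands in for the model-serving endpoint.
import asyncio

from fastapi import FastAPI, WebSocket

app = FastAPI()
MODEL_QUEUE: asyncio.Queue = asyncio.Queue()


async def fake_model_call(prompt: str) -> str:
    """Placeholder for the real model-serving invocation."""
    await asyncio.sleep(0.1)  # pretend inference latency
    return f"echo: {prompt}"


async def model_worker() -> None:
    """Drains the queue so the WebSocket handler never blocks on inference."""
    while True:
        prompt, reply = await MODEL_QUEUE.get()
        reply.set_result(await fake_model_call(prompt))


@app.on_event("startup")
async def start_worker() -> None:
    asyncio.create_task(model_worker())


@app.websocket("/chat")
async def chat(ws: WebSocket) -> None:
    # Stateless: session identity travels with each message, not the server.
    await ws.accept()
    while True:
        prompt = await ws.receive_text()
        reply: asyncio.Future = asyncio.get_running_loop().create_future()
        await MODEL_QUEUE.put((prompt, reply))
        await ws.send_text(await reply)
```

With a real broker, the in-memory future would become a correlation ID plus a response topic, which is exactly where the queue-to-stream migration mentioned above comes in.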
Post-Session Evaluation
After finishing the 20-minute monologue, I asked ChatGPT for structured feedback across typical dimensions used in Staff-level system design interviews. The evaluation broke down as follows:
✅ What went well:
- System clarity: Clean decomposition into user-facing, backend, and model-serving layers.
- Data flow: Good async strategy between UI and model endpoints, including proper use of queues and potential migration to streams.
- Scalability: Realistic approach to request distribution, memory-heavy model workloads, and global sharding.
- Data separation: Thoughtful distinction between live production data and analytics pipelines—clean decoupling.
- Tradeoffs: Clear articulation of when and why caching is ineffective (see the sketch after this list), and how to prioritize model latency over ACID guarantees.
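To illustrate the caching point: an exact-match response cache only pays off when prompts repeat verbatim, and free-form chat almost never does, so hit rates stay near zero even with normalization. A hypothetical sketch (all names invented for illustration):

```python
# Hypothetical illustration: exact-match response caching for LLM output.
# Free-form prompts rarely repeat byte-for-byte, so hit rates stay near
# zero unless you cache normalized, high-frequency queries.
import hashlib


class ResponseCache:
    def __init__(self) -> None:
        self._store: dict[str, str] = {}
        self.hits = 0
        self.misses = 0

    @staticmethod
    def _key(prompt: str) -> str:
        # Light normalization helps a little, but "What's the weather?"
        # and "whats the weather in berlin" still miss each other.
        normalized = " ".join(prompt.lower().split())
        return hashlib.sha256(normalized.encode()).hexdigest()

    def get(self, prompt: str) -> str | None:
        cached = self._store.get(self._key(prompt))
        if cached is None:
            self.misses += 1
        else:
            self.hits += 1
        return cached

    def put(self, prompt: str, response: str) -> None:
        self._store[self._key(prompt)] = response
```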
⚠️ What could be expanded:
- Observability: No mention of logging, metrics, tracing, or monitoring workflows (a minimal instrumentation sketch follows this list).
- Security & compliance: No direct reference to GDPR, PII handling, or access control beyond basic authentication.
- Model serving internals: While load-aware model invocation was covered, serving-stack choices (TGI, Triton, etc.) were left unspecified. There's still a lot for me to learn here, since I'm new to the field of AI.
- Failure handling: Retry strategies were mentioned in passing, but SLAs, circuit breakers, and degradation behavior weren't explored in detail (see the breaker sketch below).
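For the observability gap, the minimum I could have named is wrapping every model call with latency and outcome metrics. A sketch using prometheus_client (my choice here, not something from the session; metric names are invented):

```python
# Sketch of minimal model-call instrumentation; metric names are invented.
import time

from prometheus_client import Counter, Histogram, start_http_server

MODEL_CALLS = Counter(
    "model_calls_total", "Model invocations", ["tier", "outcome"]
)
MODEL_LATENCY = Histogram("model_latency_seconds", "End-to-end model latency")


def observed_model_call(call, prompt: str, tier: str = "free") -> str:
    """Wraps any model invocation with latency and outcome metrics."""
    start = time.monotonic()
    try:
        result = call(prompt)
        MODEL_CALLS.labels(tier=tier, outcome="ok").inc()
        return result
    except Exception:
        MODEL_CALLS.labels(tier=tier, outcome="error").inc()
        raise
    finally:
        MODEL_LATENCY.observe(time.monotonic() - start)


if __name__ == "__main__":
    start_http_server(9000)  # exposes /metrics for a Prometheus scraper
```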
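And for failure handling, the behavior I only touched on in passing might look roughly like this: bounded retries with exponential backoff behind a simple circuit breaker that fails fast once the model endpoint is clearly unhealthy, then degrades gracefully. A hand-rolled sketch with invented thresholds, not a library recommendation:

```python
# Hand-rolled sketch: retries with exponential backoff behind a simple
# circuit breaker. Thresholds and the degraded fallback are invented.
import time


class CircuitOpenError(RuntimeError):
    pass


class CircuitBreaker:
    def __init__(self, failure_threshold: int = 5, reset_after: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at: float | None = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise CircuitOpenError("failing fast; endpoint unhealthy")
            self.opened_at = None  # half-open: allow one probe request
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # healthy response closes the breaker
        return result


def call_with_retries(breaker: CircuitBreaker, fn, prompt: str,
                      attempts: int = 3) -> str:
    for attempt in range(attempts):
        try:
            return breaker.call(fn, prompt)
        except CircuitOpenError:
            break  # don't hammer a known-bad endpoint
        except Exception:
            time.sleep(2 ** attempt * 0.5)  # exponential backoff
    return "The assistant is briefly unavailable; please retry."  # degrade
```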
Importantly, many of these areas could have been addressed with follow-up prompts. I simply chose to prioritize architectural flow and practical decisions due to the time constraint.
Reflections on Interview Dynamics
One major takeaway: it’s not a weakness if the interviewer needs to ask questions. In fact, interviewers are trained to dig deeper, and doing so allows them to assess depth, not just breadth.
In a real interview, the interviewer would likely have asked:
“Would you mind walking me through your observability stack?”
“How do you ensure privacy compliance for international users?”
“What’s your failover strategy if an LLM node crashes mid-session?”
And I would have been ready to answer. But given no feedback window, the biggest challenge was prioritizing content without knowing what the interviewer values most.
Conclusion
Practicing system design this way—with ChatGPT as a neutral, non-interrupting sparring partner—offers a unique way to pressure-test both structure and pacing. It forces discipline around how much to cover in limited time, while still allowing a safe space to reflect and iterate afterward.
This session reinforced my architectural intuition, but also reminded me how valuable interviewer interaction really is—not just for clarifying ideas, but for surfacing expertise I might otherwise leave unsaid.
Level assessment after this session: Tracking toward Staff Engineer (L6) — with clear strengths in architecture and tradeoff reasoning, and opportunities to deepen on compliance, observability, and operational resilience.
ChatGPT continues to be a surprisingly effective mock interviewer for Staff-level system design—one that can reflect, critique, and scale with your thinking.