- Build a Simulated Interview Tool (Prototype): This is a more experimental sub-project aimed at interactive practice:
- Concept: The user picks a system design problem (say “Design Instagram”) from a list, and the tool guides them through an interview. It first presents the problem, waits for the user to state how they’d clarify requirements, then perhaps shows ideal clarifications, then asks for a high-level design, and so on, mimicking an interviewer.
- Implementation approaches:
- A simple scripted dialogue using predefined answers: We could create a text-based script (even a markdown or JSON file) for each case study, containing prompts and expected points. The tool could be a command-line program or a web chatbot that reveals model answers step by step. This would not require AI, just a structured script (like a decision tree or linear workflow).
- A more advanced AI-driven chatbot: Use an LLM (like GPT-4 via API) primed with system design knowledge (perhaps given the Primer content, or using vector search). The chatbot plays the role of interviewer: it can answer questions the user asks (“Do we have a latency requirement?” etc.) and give hints if the user is stuck, providing a dynamic experience. We’d need to carefully design the prompt and possibly fine-tune with example dialogues. Because this can be complex, it might start as a community collaboration or a separate tool in the repo (maybe under `/tools/simulated_interview`, with instructions to use an OpenAI API key).
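For the non-AI path, a scenario file could be little more than a list of prompt/answer pairs. A minimal Python sketch of such a format and loader (the scenario content, field names, and file layout are illustrative, not final):

```python
import json

# Hypothetical scenario definition: each step pairs an interviewer prompt
# with the model answer revealed after the user responds.
URL_SHORTENER_SCENARIO = {
    "title": "Design a URL Shortener",
    "steps": [
        {
            "prompt": "What are your assumptions and requirements?",
            "model_answer": "Shorten URLs, redirect quickly, handle high "
                            "traffic with a read-heavy workload.",
        },
        {
            "prompt": "How would you design the high-level architecture?",
            "model_answer": "Load-balanced web tier, an encoding service, "
                            "a key-value store for short-to-long mappings, and a cache.",
        },
    ],
}

def load_scenario(path):
    """Load a scenario from a JSON file mirroring the dict above."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```

Because each scenario is plain data, contributors could add new case studies without touching the tool’s code.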
- Prototype: Start with one scenario (perhaps “URL Shortener” since it’s a classic). Write a scripted Q&A outline: e.g.
- Interviewer: “What are your assumptions and requirements?” (User types their answer)
- Interviewer: “Okay, some key requirements to note are X, Y. Next, how would you design the high-level architecture?” …
Use this to test feasibility and gather feedback from a few users.
- If using an AI approach, require users to supply their own API key (for cost and privacy reasons); we can provide a small Python script or Jupyter notebook where they plug in their key and run the chat.
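The AI-driven variant might look like the following sketch, which reads the user-supplied key from an environment variable. The prompt wording, model choice, and function names are assumptions for illustration, not a finalized design:

```python
import os

# Illustrative interviewer persona; the exact wording would need iteration.
SYSTEM_PROMPT = (
    "You are a system design interviewer. The problem is: {problem}. "
    "Ask one question at a time, answer the candidate's clarifying questions, "
    "and give hints only when they are stuck. Do not reveal a full solution."
)

def build_messages(problem, history):
    """Assemble chat messages for one turn; `history` is a list of
    (role, content) tuples accumulated during the session."""
    messages = [{"role": "system",
                 "content": SYSTEM_PROMPT.format(problem=problem)}]
    messages += [{"role": r, "content": c} for r, c in history]
    return messages

def main():
    from openai import OpenAI  # deferred so the module imports without the package
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # user-supplied key
    history = []
    while True:
        user = input("You: ")
        if user.lower() in {"quit", "exit"}:
            break
        history.append(("user", user))
        reply = client.chat.completions.create(
            model="gpt-4",
            messages=build_messages("Design a URL Shortener", history),
        ).choices[0].message.content
        history.append(("assistant", reply))
        print(f"Interviewer: {reply}")

if __name__ == "__main__":
    main()
```

Keeping the key in `OPENAI_API_KEY` (never in the repo) addresses the cost and privacy concerns above.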
- Integrate Practice Tool into Repo: Once a stable method is ready, integrate it:
- Provide instructions in the README or a section of the Primer on “Interactive Practice”. If it’s a web-based tool (maybe hosted on GitHub Pages or an external site), link to it. If it’s a script, include usage instructions (perhaps even a Docker container to run it easily).
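If we go the Docker route, the container could be very small. A sketch, assuming the script lives under `/tools/simulated_interview` with a `requirements.txt` and an `interview.py` entry point (all of these names are placeholders until the layout is settled):

```dockerfile
FROM python:3.12-slim
WORKDIR /app
# Assumed layout: the tool lives in tools/simulated_interview in the repo.
COPY tools/simulated_interview/ .
RUN pip install --no-cache-dir -r requirements.txt
ENTRYPOINT ["python", "interview.py"]
```

Users would then run something like `docker build -t simulated-interview . && docker run -it simulated-interview`, passing `-e OPENAI_API_KEY=...` only if they opt into the AI mode.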
- Include a couple of example runs in documentation so users know what to expect. For instance, show a snippet of an interview dialogue or a screenshot if it’s web-based.
- Encourage contributions of new scenarios or improvements to the script/AI prompt. Over time, the library of simulated interviews can grow (covering all major case studies in the Primer).
- Quality Assurance: Ensure that the converted Mermaid diagrams accurately reflect the originals; have technical reviewers or the original authors verify them. For the interview tool, be cautious about its advice: if AI-driven, double-check that it doesn’t produce incorrect design suggestions. Label it as beta and encourage users to still refer to the written solutions.