Reflection 2
How This Reflection Works
Your Facilitator will pull from these questions to lead a cohort-wide discussion. If your AI assistant is busy building during a Challenge and you have a free moment, these questions also make great team conversation starters while you wait.
What You Built
- What was the moment real data first showed up in your app? What did that feel like compared to any hardcoded data from Challenge 1?
- What's the feature you're most proud of right now? Is it something from the baseline, or did your team go in an unexpected direction?
- When an application moves from hardcoded sample data to real data sources, what kinds of things tend to break? Why does something that works perfectly with fake data often fail with real data? What does that tell you about the assumptions baked into a prototype?
What You Practiced
- Lesson 2 described the shift from building in chat to building in a development environment. What do you gain by moving to a real environment? What gets harder? Why is that tradeoff worth it?
- During Lesson 2, you wrote user stories to plan what you'd build. How well do you think pre-planned user stories hold up once you start actually building? When is it right to follow the plan, and when should you let the work redirect you?
- The "What's Real, What's Fake?" audit is designed to give you a roadmap for what to tackle next. Is that kind of structured inventory useful for prioritization, or would you naturally gravitate toward the most interesting problem regardless?
- AI can produce working code quickly, sometimes faster than you can verify it. What's the risk of moving faster than you can check? When AI says something is "done," how do you confirm it actually meets the requirement you had in mind?
How You Worked
- Now that you're in a shared project, did your team organize differently than in Challenge 1? Did anyone work on different parts at the same time?
- In a chat tool, there's no undo; if something goes wrong, you start over. In a real development environment, you can save, undo, and roll back. How does the ability to undo change your willingness to experiment? What risks would you take if you knew you could always revert?
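The save / undo / roll back cycle described above can be sketched with Git, the version-control tool most development environments build on. This is a minimal illustration, not part of the Challenge itself; the file name and commit message are hypothetical.

```shell
# Start tracking a project and save a known-good checkpoint.
git init my-app && cd my-app
echo "real data" > data.txt
git add data.txt
git commit -m "Working version"       # save: a checkpoint you can return to

# Try a risky experiment on top of the checkpoint.
echo "risky experiment" > data.txt
git diff                              # see exactly what changed

# Didn't work? Roll back to the last saved checkpoint instantly.
git checkout -- data.txt
cat data.txt                          # back to "real data"
```

Because the checkpoint is always there, the cost of a failed experiment drops to a single command, which is exactly what makes bolder experiments feel safe.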
Looking Ahead
- Think about what you'd need to tell an AI coding assistant every time you started a new conversation: what the project is, who it's for, your design decisions. What would change if the tool already knew all of that? What kinds of repetitive context do you think teams waste the most time re-explaining?