
Challenge 4: Go Live

Recap:

In Challenge 3, your project context file changed everything. AI knew your project from the first word of every conversation, and you used that speed to build ambitious features: a new data source, condition-specific recommendations, and whatever your team chose to push toward. Your app went from displaying real data to being genuinely useful.

Then in Lesson 4, you zoomed out. You learned what deployment means: moving your app from your private workspace to a live URL that anyone can visit. You saw that your Save & Sync habit has been building toward this moment all along, that the deployment pipeline is already set up, and that going live is one prompt away. You ran the pre-flight check and made a game plan for this final sprint.

You also learned some important truths about AI-built software: the two-week cliff (features that work on their own start breaking once they interact), the validation gap (AI says it's done before you've verified it works), and why authoritative data sources matter more than AI-generated explanations. Those aren't reasons to hold back. They're reasons to ship thoughtfully.

This is it. Your final sprint. Time to ship.

The Challenge

Deploy your Expedition Safety Brief to a live URL. Then polish, fix, and add that final touch. By the end of this challenge, your Expedition Safety Brief should be live: a real application, accessible to anyone with a browser, built by a team that had never written code before today.

This is the challenge where your workshop project becomes a real product. Deploy first, then iterate.

What to Build

Items are listed in priority order. If time is tight, focus on the items near the top first.

  • Your Expedition Safety Brief is deployed to a live URL: tell your AI coding assistant to deploy the project and get back a working URL that anyone can visit from any device
  • Fix any issues from the pre-flight check: if something was broken or outdated when you checked in Lesson 4, fix it now
  • At least one final improvement: polish an existing feature, add something new, or improve the experience based on what you wish the brief had
  • Your live version reflects your latest work: save and sync so the live URL shows everything you've built, not an earlier version

The items below are stretch goals for teams that finish the baseline. Your team can also define your own stretch goals based on what interests you.

  • Mobile-friendly: test your live URL on a phone and make sure it looks good on a small screen (tell your AI coding assistant to make the layout responsive if it isn't already)
  • About page: add a section that explains what the Expedition Safety Brief is, who built it, and where the data comes from. Give your creation a proper introduction
  • "Last updated" timestamp: show when the data was last refreshed so users know they're looking at current conditions, not stale information
  • Share it: share the live URL with a friend or family member who hikes or spends time outdoors and ask what they think. That's the ultimate test of whether what you built is useful

Final Growth Check-in

There is a final growth check-in on the next page. Navigate there and fill it out before the final reflection begins.

Tips

  • Deploy first, improve second. Don't try to perfect everything before you deploy. Get the URL, confirm it works, then use the remaining time to polish. You can redeploy as many times as you want. Save and sync your work, and the pipeline updates the live version automatically.
  • Use the game plan you made in Lesson 4. Your team already discussed what to tackle first. Stick to the plan, or adjust it now that you're in the sprint.
  • Test the live URL, not just the workspace preview. Your workspace preview and the deployed version should look the same, but always check the real URL. Open it in a new browser window or on a teammate's phone. Check the things that matter most: Does the NPS alert status show today's real data? Do the pages and navigation work? Does the condition-specific advice match current alert levels? These are your acceptance criteria from earlier challenges; they should still pass on the live version.
  • If something goes wrong, don't panic. If deployment fails, ask your AI assistant: "The deployment didn't work. Can you investigate why and fix it?" Your AI assistant can see what the pipeline flagged and make the necessary changes. If the live URL looks different from your workspace preview, save and sync again to trigger a fresh pipeline run. If a feature that worked in your workspace is broken on the live URL, save and sync to make sure your latest code is pushed.
  • Save and sync after each change. This is the same habit you've been building. Make a change, verify it in the workspace, then save and sync. The pipeline checks your code and updates the live version automatically. If the pipeline flags a problem, your AI assistant can investigate and fix it.