Build a 'Micro' Dining App in a Weekend with Firebase and LLMs
Build a micro dining app in 48–72 hours with Firestore, Firebase Auth, and an LLM—step-by-step starter kit for non-developers.
Pain point: you and your friends waste 20+ minutes deciding where to eat every time. Imagine a tiny, private web app that recommends restaurants, collects quick votes, and closes the loop — built in 48–72 hours, without hiring a dev. This guide shows exactly how to do it using Firestore, Firebase Authentication, and an LLM (ChatGPT or Claude) as the recommendation engine.
Why build a micro app in 2026?
Micro apps (short-lived, focused apps for one person or a small team) took off in the mid-2020s as generative AI and low-friction cloud platforms matured. Late 2025 and early 2026 brought desktop AI assistants (such as Anthropic's Cowork) and easier LLM integration options from providers like OpenAI, making it realistic for non-developers to assemble production-quality prototypes in hours, not weeks.
“I built a dining app in a week” — an increasingly common story. Micro apps are practical for social workflows, hacks, and internal automations.
What you'll ship in 48–72 hours (minimal viable micro app)
- Private web app hosted on Firebase Hosting
- User sign-in via Firebase Authentication (email link or social)
- Rooms for events ("Friday dinner") saved in Firestore
- LLM-based recommendations generated from room preferences
- Simple voting + final decision flow
- Server-side Functions for LLM calls (secure API keys)
Starter architecture (simple, reliable)
Client (Web) --auth--> Firebase Auth
Client --reads/writes--> Firestore (rooms, users, options, votes)
Client --trigger--> Cloud Function (recommendation) --calls--> LLM API
Cloud Function --writes--> Firestore (cached recommendations)
Firebase Hosting serves the client
Why this pattern? It keeps secrets server-side (LLM API keys), leverages Firestore realtime listeners for live voting, and uses Functions for deterministic prompts and caching.
Firestore data model (compact)
rooms/{roomId}
  name: "Friday Pho"
  hostId: "uid_abc"
  createdAt: timestamp
  prefs: { cuisine: "asian", price: "mid", distance: 5 }
rooms/{roomId}/options/{optionId}
  name, address, meta (Yelp/Google place id)
rooms/{roomId}/votes/{userId}
  optionId, weight
rooms/{roomId}/recommendations/{runId}
  prompt, result, createdAt, source: "openai" | "anthropic"
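To make the model concrete, here is a minimal sketch of creating a room and adding a candidate option with the v9 web SDK. The helper names (createRoom, addOption) are illustrative, and db is the Firestore instance from the initialization snippet below.

import { collection, addDoc, serverTimestamp } from 'firebase/firestore'

// Create a room document owned by the current user
async function createRoom(db, hostId, prefs) {
  const roomRef = await addDoc(collection(db, 'rooms'), {
    name: 'Friday Pho',
    hostId,
    createdAt: serverTimestamp(),
    prefs // e.g. { cuisine: 'asian', price: 'mid', distance: 5 }
  })
  return roomRef.id
}

// Add a candidate restaurant under the room
function addOption(db, roomId, option) {
  return addDoc(collection(db, 'rooms', roomId, 'options'), option)
}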
48–72 hour plan: step-by-step
Day 0 — Prep (1–2 hours)
- Create a Firebase project at console.firebase.google.com
- Enable Firestore (start in test mode for rapid iteration), Authentication (Email Link or Google), and Functions. Note that calling an external LLM API from Functions requires the Blaze pay-as-you-go plan.
- Pick an LLM provider (OpenAI or Anthropic). Create an API key and store it server-side in Functions runtime config (firebase functions:config:set llm.key="..." llm.provider="openai") so it never ships to the client.
- Install Firebase CLI and initialize a web app boilerplate: firebase init hosting firestore functions.
Day 1 — Core features (6–8 hours)
- Implement sign-in with Firebase Authentication (use FirebaseUI to avoid UI work).
- Create the minimal Firestore schema: rooms, options, votes.
- Build a page to create a room and invite teammates (share a room URL or code).
- Implement voting UI using Firestore realtime listeners to show live counts (see the listener sketch after this list).
- Write a callable Cloud Function, recommend, that reads the room prefs and options and calls the LLM for ranked recommendations (full snippet below).
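A minimal sketch of the live-vote listener, assuming the votes subcollection from the data model above; tallyVotes and the render callback are illustrative names, not a fixed API.

import { collection, onSnapshot } from 'firebase/firestore'

// Subscribe to votes for a room and recompute counts on every change
function tallyVotes(db, roomId, render) {
  // onSnapshot returns an unsubscribe function; call it on page teardown
  return onSnapshot(collection(db, 'rooms', roomId, 'votes'), (snap) => {
    const counts = {}
    snap.forEach((vote) => {
      const { optionId, weight = 1 } = vote.data()
      counts[optionId] = (counts[optionId] || 0) + weight
    })
    render(counts) // e.g. update the live tally in the DOM
  })
}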
Day 2 — Polish and resilience (4–8 hours)
- Cache recommendation results in Firestore to avoid repeated LLM billing.
- Add Firestore security rules to lock rooms to authenticated members; require host for destructive actions.
- Improve prompts and UX: show explanation text from the LLM ("Why we picked X"). A sketch of calling the recommendation function from the client follows this list.
- Deploy to Firebase Hosting and test with real users.
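Because recommend is an onCall function, the client invokes it with httpsCallable rather than a raw HTTP request. A minimal sketch, assuming the app instance from the initialization snippet below:

import { getFunctions, httpsCallable } from 'firebase/functions'

// Ask the server for a fresh (or cached) recommendation run
async function getRecommendation(app, roomId) {
  const fns = getFunctions(app)
  const recommend = httpsCallable(fns, 'recommend')
  const { data } = await recommend({ roomId })
  return data // { prompt, result, createdAt, source }
}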
Optional Day 3 — Extra nice-to-haves
- Add App Check to prevent client abuse (initialization sketch after this list)
- Enable function logging + Firebase Performance monitoring
- Add sharing via SMS or short QR codes
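If you enable App Check, the client needs a one-time initialization. A minimal sketch with the reCAPTCHA v3 provider; the site key placeholder is yours to fill in from the console:

import { initializeAppCheck, ReCaptchaV3Provider } from 'firebase/app-check'

// Run once right after initializeApp; App Check tokens then ride along
// with Firestore and Functions requests automatically
initializeAppCheck(app, {
  provider: new ReCaptchaV3Provider('YOUR_RECAPTCHA_V3_SITE_KEY'),
  isTokenAutoRefreshEnabled: true
})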
Key code snippets (copy-paste friendly)
Initialize Firebase client (web)
import { initializeApp } from 'firebase/app'
import { getAuth } from 'firebase/auth'
import { getFirestore } from 'firebase/firestore'

// Paste the web app config from your Firebase console
const firebaseConfig = { /* from console */ }
const app = initializeApp(firebaseConfig)
const auth = getAuth(app) // FirebaseUI wires the sign-in flows onto this
const db = getFirestore(app)
Cloud Function: call LLM and cache
const functions = require('firebase-functions')
const admin = require('firebase-admin')

admin.initializeApp()
const db = admin.firestore()

exports.recommend = functions.https.onCall(async (data, context) => {
  if (!context.auth) throw new functions.https.HttpsError('unauthenticated', 'Sign in first')
  const { roomId } = data
  if (!roomId) throw new functions.https.HttpsError('invalid-argument', 'roomId is required')

  const roomRef = db.collection('rooms').doc(roomId)
  const roomSnap = await roomRef.get()
  if (!roomSnap.exists) throw new functions.https.HttpsError('not-found', 'Unknown room')
  const room = roomSnap.data()

  // Simple cache: reuse the latest recommendation if it is under a minute old
  const recSnap = await roomRef.collection('recommendations')
    .orderBy('createdAt', 'desc').limit(1).get()
  if (!recSnap.empty && (Date.now() - recSnap.docs[0].data().createdAt.toMillis()) < 60_000) {
    return recSnap.docs[0].data()
  }

  const prompt = buildPrompt(room)
  const llmResp = await callLLM(prompt)
  const saved = {
    prompt,
    result: llmResp,
    createdAt: admin.firestore.FieldValue.serverTimestamp(),
    source: functions.config().llm.provider || 'openai'
  }
  await roomRef.collection('recommendations').add(saved)
  return saved
})

function buildPrompt(room) {
  // Minimal version of the template below; extend it with the room's options
  const { cuisine, price, distance } = room.prefs || {}
  return `Given these preferences: ${cuisine}, price ${price}, max distance ${distance} km, ` +
    'return a ranked list of the top 3 restaurants with 1-sentence reasons and a confidence score (0-1).'
}

async function callLLM(prompt) {
  const key = functions.config().llm.key
  // Example: OpenAI Chat Completions call (simplified); Node 18+ runtimes have global fetch
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: { 'Authorization': `Bearer ${key}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: prompt }],
      max_tokens: 400
    })
  })
  const json = await res.json()
  return json.choices?.[0]?.message?.content || ''
}
Simple prompt template (effective & concise)
System: You are a friendly bot that recommends restaurants.
User: Given these preferences: {cuisine}, price {price}, max distance {distance} km, and these candidate places:
{list options with name, tags, distance}
Return a ranked list of top 3 with 1-sentence reasons and a confidence score (0-1).
Example LLM output:
1) PhoHouse (confidence 0.92): Highly rated pho within 3 km, quick seating for groups; best fit for 'asian' at a casual mid-price.
2) BunBowl (confidence 0.81): Great vegetarian options and group-friendly seating.
3) NoodleBar (confidence 0.77): Slightly pricier but excellent ambiance.
Firestore Security Rules (starter)
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /rooms/{roomId} {
      allow read: if request.auth != null && isMember(roomId);
      allow create: if request.auth != null;
      allow update, delete: if request.auth != null && resource.data.hostId == request.auth.uid;
      // Subcollections (options, votes, recommendations) inherit the membership check
      match /{document=**} {
        allow read, write: if request.auth != null && isMember(roomId);
      }
    }
    function isMember(roomId) {
      // Starter pattern: keep a 'members' subcollection keyed by uid;
      // tighten further before inviting anyone outside your test group
      return exists(/databases/$(database)/documents/rooms/$(roomId)/members/$(request.auth.uid));
    }
  }
}
Practical tips for non-developers
- Use FirebaseUI to skip auth UX work — it wires up providers and email link flows for you.
- Leverage templates: Start with a simple Firebase web starter and paste these snippets.
- Prompt-first design: Mock the prompt and desired output in ChatGPT or Claude to refine the conversation before coding.
- Iterate fast: keep Firestore in test mode initially, then tighten rules before inviting others.
Costs and scaling — keep your weekend budget in check
- Firestore reads/writes: Optimize by batching writes (store votes as a map or use incremental counters; see the sketch after this list) and using realtime listeners responsibly.
- Functions: Cold starts add latency; use lightweight runtimes (Node 18+) and keep functions small.
- LLM costs: Cache recommendations. Trim tokens in prompts. Use cheaper models for drafts and higher-tier models for final explanations.
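A minimal sketch of the counter idea, assuming an aggregate counts map on the room document; increment() keeps each vote to a single write instead of a read-modify-write loop.

import { doc, setDoc, updateDoc, increment } from 'firebase/firestore'

// Record a vote and bump the aggregate counter, one write each
async function castVote(db, roomId, userId, optionId) {
  // One doc per user prevents double voting (last write wins)
  await setDoc(doc(db, 'rooms', roomId, 'votes', userId), { optionId, weight: 1 })
  // Aggregate map on the room doc: { counts: { [optionId]: n } }
  // (changing a vote also needs a decrement of the old option; omitted here)
  await updateDoc(doc(db, 'rooms', roomId), { [`counts.${optionId}`]: increment(1) })
}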
Reliability, observability & testing
- Add Firebase Performance Monitoring to track real user latency.
- Log LLM calls and errors in Functions; set alerts for function failure rates.
- Test with real groups: invite 3–6 friends and iterate on the prompt and UX.
Security & privacy (non-negotiables)
- Keep LLM keys server-side in Functions environment variables.
- Use App Check to prevent client spoofing if you plan to open invitations publicly.
- Consider data minimization: do not send user PII to the LLM unless necessary. Use hashed user IDs and pass only aggregated preferences (see the sketch after this list).
- Document who has access — privacy matters; be explicit when inviting others.
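If a prompt must reference users at all, a one-way hash inside the Function is enough to keep raw uids away from the LLM provider. A minimal sketch; the salt is an assumed secret you supply via Functions config:

const crypto = require('crypto')

// One-way pseudonym: the LLM sees a short stable token, never the uid
function anonId(uid, salt) {
  return crypto.createHash('sha256').update(salt + uid).digest('hex').slice(0, 12)
}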
Advanced patterns (if you have extra hours)
- LLM fallback chain: Call a cheaper model first and escalate when confidence is low (sketch after this list).
- Multi-LLM experiments: Try both ChatGPT and Claude and store comparative outputs to analyze quality.
- Embed context: enrich prompts with public reviews or menu snippets (watch for TOS and PII).
- Offline support: enable local caching of votes and options using Firestore's offline persistence for better UX in spotty networks.
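A minimal sketch of the fallback chain, reusing the callLLM helper from the Function above. The confidence extraction is an assumption: it presumes your prompt asks the model for a numeric score, as the template in this guide does.

async function recommendWithFallback(prompt) {
  // Try the cheap model first; escalate only when confidence looks low
  const draft = await callLLM(prompt) // e.g. gpt-4o-mini
  const match = draft.match(/confidence[^0-9]*(0?\.\d+|1(\.0+)?)/i)
  const confidence = match ? parseFloat(match[1]) : 0
  if (confidence >= 0.8) return { result: draft, model: 'cheap' }
  // Swap a higher-tier model into callLLM (or a second helper) for this call
  const better = await callLLM(prompt)
  return { result: better, model: 'premium' }
}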
Common pitfalls and how to avoid them
- Too many LLM calls: cache, debounce UI triggers, and use a function to centralize calls.
- Loose rules: avoid keeping public writes enabled — tighten rules before sharing beyond your test group.
- Unclear UX: show confidence and reasons. People trust a recommendation if it explains itself.
Real-world example & case study
Rebecca Yu's Where2Eat story inspired many micro-app builders: a small team shipped a group-dining recommender using LLM prompts and rapid cloud tools. In 2026, similar builders use desktop AI assistants (such as Anthropic's Cowork), automated prompt generation, and multi-LLM experiments to improve recommendation quality. The pattern holds: fast iteration, private sharing, low cost, high utility.
Actionable checklist to finish in a weekend
- [ ] Create Firebase project, enable Auth/Firestore/Functions/Hosting
- [ ] Add LLM provider account and set function env vars
- [ ] Scaffold client with FirebaseUI and hosting
- [ ] Implement rooms, options, votes in Firestore
- [ ] Implement Cloud Function for recommendations + caching
- [ ] Add basic Firestore rules & test with invites
- [ ] Iterate prompt and UX using real users
- [ ] Deploy and celebrate
Future-proofing: trends to watch in 2026 and beyond
- LLM providers will add better on-device and edge inference, reducing cost and latency.
- More no-code LLM connectors will appear in cloud consoles; still keep keys server-side.
- AI copilots and agents (like Cowork) will help non-devs tune prompts and generate small functions automatically.
Key takeaways
- Build fast: Firestore + Firebase Auth + Functions + LLMs = a powerful micro app stack.
- Keep secrets server-side: always call LLMs from Functions and cache responses.
- Iterate prompts: refinement often matters more than model choice.
- Protect data: apply rules, App Check, and minimal PII exposure.
Next steps & call to action
Ready to ship your micro dining app this weekend? Clone a starter repo (we provide a tested template with auth, Firestore wiring, and a Functions LLM bridge) and follow the 48–72 hour checklist above. Join our Firebase.live community to share your micro app, get feedback on prompts, and see examples of multi‑LLM setups (ChatGPT, Claude) tuned for group decisions.
Build, iterate, and share: start your micro app tonight — and stop losing time to “where should we eat?” forever.