We Run 3 Apps From 1 Firebase Project for $0/month. Here's the Architecture.

Most cloud consultants would tell you this setup is wrong.
We run three separate production applications from a single Firebase project. A public-facing marketing site. A private admin panel. A backend that handles authentication, data processing, and lead capture. The monthly hosting bill across all three: zero dollars.
Not "zero with startup credits." Not "zero for the first year." Just zero. This setup has been running in production for months.
This article isn't a Firebase tutorial. It's about the architectural thinking that lets a funded startup ship fast without burning runway on infrastructure nobody asked for.
The Problem With "Best Practices"
Walk into any Slack community for startup founders and ask how to host a web app. You'll get some variation of the same advice: Vercel for the frontend, a managed database on AWS or PlanetScale, a separate hosting provider for your admin tools, maybe a container service for backend logic.
Each one of those is a fine product. Together, they create something nobody talks about: infrastructure sprawl.
Suddenly you have four billing accounts, four dashboards, four sets of environment variables, cross-origin requests between services that used to be co-located, and the fun task of keeping authentication consistent across providers that have no native relationship with each other.
For a startup generating real revenue from hundreds of thousands of daily active users? Sure, that level of separation makes sense. For a startup with a marketing site, an admin panel, and a handful of backend functions? It's overkill by an order of magnitude.
One Project, Three Applications
Firebase Hosting supports something called multi-site hosting, where a single Firebase project serves multiple web applications, each with its own domain and build output. They all share the same Firestore database, the same Authentication service, the same Cloud Storage bucket, and the same Cloud Functions.
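Concretely, multi-site hosting is declared in `firebase.json` as an array of hosting configs, one per deploy target. This is a minimal sketch; the target names (`marketing`, `admin`) and build directories are placeholders, not our exact layout:

```json
{
  "hosting": [
    {
      "target": "marketing",
      "public": "apps/marketing/dist",
      "ignore": ["firebase.json", "**/.*"],
      "rewrites": [{ "source": "**", "destination": "/index.html" }]
    },
    {
      "target": "admin",
      "public": "apps/admin/dist",
      "ignore": ["firebase.json", "**/.*"],
      "rewrites": [{ "source": "**", "destination": "/index.html" }]
    }
  ]
}
```

Each target is bound to a named site once with `firebase target:apply hosting marketing <site-name>`, after which `firebase deploy --only hosting:marketing` deploys just that one app.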
This is the core architectural decision that makes everything else possible.
Our three applications have very different profiles. The public site needs to be fast, well-cached, and optimized for search engines. The admin panel needs to be functional and reliable but doesn't need public-facing performance tuning. The backend needs to handle sporadic server-side logic without running a persistent server.
In a multi-provider setup, making these three apps share authentication state requires token passing, CORS configuration, and cross-origin session management. In a single Firebase project, they share auth natively. A user logged into the admin panel is authenticated against the same Firebase Auth instance that the backend functions validate against. No token exchange. No OAuth dance between your own services.

Why Not Vercel?
This question comes up every time we discuss this architecture, so let's address it directly.
Vercel is the better choice if you need server-side rendering, edge middleware, incremental static regeneration, or tight Next.js integration. If those are requirements for your product, use Vercel. It's excellent at what it does.
But for applications that compile down to static files, Vercel's advantages don't apply. You're paying for a deployment platform that runs a server you don't need. Firebase Hosting serves static files from a global CDN with automatic SSL on the free tier. Same end result, zero cost.
The bigger issue is co-location. When your frontend and backend live on different providers, every interaction between them involves a network hop across providers, separate deployment pipelines, and independent failure modes. Putting a Vercel frontend in front of a Firebase backend means your "simple" architecture now has two separate CI/CD pipelines, two sets of environment variables to keep in sync, and CORS headers to maintain whenever you add a new endpoint.
Co-location eliminates that entire category of problems. Your frontend calls your backend functions through the Firebase SDK. No HTTP endpoints to configure, no API gateway, no CORS. The SDK handles auth automatically. If it works locally, it works in production.
The Caching Strategy Nobody Thinks About
One of the most impactful decisions in any static hosting setup is how you configure cache headers. Most developers either ignore caching entirely or set a blanket policy. Both approaches leave performance on the table.
The strategy that works: immutable assets, volatile HTML.
Static assets like JavaScript bundles, CSS files, and images get cached for a full year with the immutable directive. The browser downloads them once and never checks back. This only works because modern build tools like Vite hash every filename based on content. When your code changes, the filename changes, and the browser fetches the new version naturally.
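The content-hashing idea is easy to demonstrate outside any build tool. This sketch uses a plain SHA-256 digest; the naming scheme and hash length are illustrative, not Vite's exact algorithm:

```typescript
// Content-hashed filenames: same content -> same name, new content -> new name.
import { createHash } from "node:crypto";

function hashedFilename(name: string, ext: string, content: string): string {
  const hash = createHash("sha256").update(content).digest("hex").slice(0, 8);
  return `${name}.${hash}.${ext}`;
}

const v1 = hashedFilename("app", "js", "export const version = 1;");
const v2 = hashedFilename("app", "js", "export const version = 2;");
// v1 and v2 differ, so a year-long immutable cache can never serve stale code:
// the new deploy simply references a file the browser has never seen.
```

Because the filename is derived from the content, "cache forever" is safe by construction: invalidation happens by changing the URL, not by expiring the cache.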
HTML files get the opposite treatment: max-age=0, must-revalidate. The browser always checks if the HTML has changed. Since HTML files reference the hashed asset filenames, a fresh HTML file automatically points to the latest JS and CSS bundles.
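In Firebase Hosting, both policies live in the `headers` section of `firebase.json`. This fragment shows a single site for brevity; with multi-site hosting the same block sits inside each target's entry, and the asset glob should match your actual file types:

```json
{
  "hosting": {
    "public": "dist",
    "headers": [
      {
        "source": "**/*.@(js|css|woff2|png|svg|webp)",
        "headers": [
          { "key": "Cache-Control", "value": "public, max-age=31536000, immutable" }
        ]
      },
      {
        "source": "**/*.html",
        "headers": [
          { "key": "Cache-Control", "value": "max-age=0, must-revalidate" }
        ]
      }
    ]
  }
}
```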
The result: returning visitors load your site almost entirely from cache. Only the HTML file (a few kilobytes) gets re-fetched. The JavaScript, CSS, images, and fonts all come from local cache instantly. For a marketing site where first impressions matter, this is the difference between a 200ms load and a 2-second load on repeat visits.
Monorepo Without the Monorepo Tools
All three applications live in a single repository. This isn't because monorepos are trendy. It's because shared types are a real requirement.
When your backend writes a document to the database and your frontend reads it, both sides need to agree on the shape of that data. In a multi-repo setup, you end up publishing a shared types package to npm, maintaining version numbers, and dealing with update propagation. In a monorepo, both apps import from the same shared/types directory. Change a type definition once, and TypeScript catches mismatches in both apps immediately.
We deliberately skipped build orchestration tools like Turborepo or Nx. These tools are valuable when you have complex dependency graphs between packages, shared build caches across teams, or more than a handful of packages that need coordinated builds. For our setup, path aliases do the job. The frontends resolve shared types through their framework's alias config. Backend functions resolve them through TypeScript path aliases with a post-compilation rewrite step.
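For the functions package, that looks roughly like this `tsconfig.json` fragment; the `@shared/*` alias name and directory are illustrative:

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@shared/*": ["../shared/types/*"]
    }
  }
}
```

The wrinkle worth knowing: `tsc` type-checks aliases but emits them untouched in the compiled JavaScript, which is why a post-compilation rewrite (a tool like `tsc-alias`) is needed to turn `@shared/*` imports back into relative paths Node can resolve. Vite's side is simpler, since a `resolve.alias` entry pointing at the same directory is honored at bundle time.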
If the repository grows to the point where build times become a bottleneck, adding Turborepo is a configuration change, not a rewrite. But premature tooling is its own form of technical debt. It adds cognitive overhead for every developer who touches the codebase, and it needs maintenance and upgrades independent of the product. We'd rather add it when the pain is real.
Path-Scoped Deployments
This might be the highest-leverage decision in the entire architecture.
In a standard CI/CD setup, pushing to main triggers a deployment of everything. A typo fix on the marketing site redeploys the backend. A database rule change rebuilds both frontends. This wastes time and introduces unnecessary risk.
Our approach: each application has its own GitHub Actions workflow, and each workflow watches specific directories. A change to the marketing site only triggers the marketing site deployment. A change to the backend only deploys backend functions. A change to shared types triggers both frontend workflows, because both apps consume those types.
The result is a deployment matrix where changes deploy exactly what they affect, nothing more. A Firestore rules update deploys only rules. A CSS change on the marketing site deploys only the marketing site. And all of this is just YAML configuration, no custom tooling required.
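A sketch of one such workflow, with hypothetical paths, script names, and token-based auth shown for brevity (a service account is the more current approach):

```yaml
# .github/workflows/deploy-marketing.yml  (names are illustrative)
name: Deploy marketing site
on:
  push:
    branches: [main]
    paths:
      - "apps/marketing/**"
      - "shared/types/**" # shared types also trigger this frontend
  workflow_dispatch: {} # allow forcing a clean deploy by hand
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run build --workspace=apps/marketing
      - run: npx firebase-tools deploy --only hosting:marketing
        env:
          FIREBASE_TOKEN: ${{ secrets.FIREBASE_TOKEN }}
```

The `paths` filter is the entire mechanism: GitHub compares the changed files against those globs and skips the run when nothing matches.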
Infrastructure workflows also support manual dispatch, which is useful for forcing a clean deploy when you need to push a configuration change that doesn't involve code.
Backend: Functions, Not an API
The conventional wisdom for web backends is to build a REST API (or GraphQL, if you're feeling adventurous). For our use case, that's unnecessary abstraction.
We use Firebase Cloud Functions organized by the application they serve. There's no router, no middleware stack, no express server. Each function is a standalone unit that handles one operation: capturing a lead, managing a user, synchronizing auth state.
The frontend calls these functions through Firebase's client SDK using callable functions. The SDK handles authentication headers, request serialization, and error handling automatically. It's less code than setting up axios with interceptors, and it guarantees that the auth token attached to every request is valid.
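To make this concrete, here is a sketch of one such callable, not our exact code: the collection name, payload shape, and validation are illustrative.

```typescript
// functions/src/leads.ts -- a standalone callable handling one operation.
import { onCall, HttpsError } from "firebase-functions/v2/https";
import { getFirestore } from "firebase-admin/firestore";

export const captureLead = onCall(async (request) => {
  const { email, message } = request.data as { email?: string; message?: string };
  if (!email || !email.includes("@")) {
    throw new HttpsError("invalid-argument", "A valid email is required.");
  }
  await getFirestore().collection("leads").add({
    email,
    message: message ?? "",
    uid: request.auth?.uid ?? null, // auth context attached by the SDK, if signed in
    createdAt: new Date(),
  });
  return { ok: true };
});
```

On the client, `httpsCallable(getFunctions(), "captureLead")` returns a function you invoke with the payload; the SDK serializes the request, attaches the user's ID token, and surfaces `HttpsError` codes as typed errors.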
For operations that need to happen in response to data changes, we use Firestore triggers. When certain documents change, a trigger fires and handles the side effect. This keeps business logic centralized in the backend rather than scattered across frontend applications.
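A trigger follows the same one-function-per-operation shape. This is a hedged sketch; the document path and side effect are placeholders:

```typescript
// functions/src/triggers.ts -- react to data changes server-side.
import { onDocumentCreated } from "firebase-functions/v2/firestore";

export const onLeadCreated = onDocumentCreated("leads/{leadId}", async (event) => {
  const lead = event.data?.data();
  if (!lead) return;
  // The side effect lives here, in the backend, not in whichever
  // frontend happened to write the document: notify the team,
  // enrich the record, bump a counter, and so on.
  console.log(`New lead captured: ${lead.email}`);
});
```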
One design choice worth highlighting: we use the Firebase Admin SDK in our backend functions, which bypasses security rules entirely. This is intentional. Security rules protect client-side access. Server-side functions have full control over what data gets persisted and how. This separation means we can have strict rules preventing direct client writes to sensitive collections while still allowing our backend to write to those collections freely.
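In Firestore security rules, that pattern looks something like this sketch, where the `admin` custom claim is a hypothetical example of gating reads:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Clients can never write leads directly. The backend uses the
    // Admin SDK, which bypasses these rules entirely, so it can still
    // create documents here.
    match /leads/{leadId} {
      allow read: if request.auth != null && request.auth.token.admin == true;
      allow write: if false;
    }
  }
}
```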
Build Optimization for Landing Pages
The marketing site is a React SPA built with Vite. Not Next.js. Not Remix. Not Astro. For a site where every page is essentially static content with interactive elements, a well-optimized SPA with code splitting is the right tool.
The build configuration makes a few deliberate choices.
Vendor chunk splitting puts the framework libraries into their own bundle. This chunk changes rarely, so it stays cached for the long term. Application code changes frequently, but users don't re-download React on every deploy.
Dropping console statements and debugger calls at build time through esbuild's built-in transform means developers can use console.log freely during development with zero production impact. No lint rules to enforce, no conditional logging utilities to maintain.
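Both choices fit in a few lines of Vite config. This is a sketch with illustrative chunk contents, not our exact file:

```typescript
// vite.config.ts
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()],
  esbuild: {
    // Strip console/debugger calls from production output at transform time.
    drop: ["console", "debugger"],
  },
  build: {
    rollupOptions: {
      output: {
        manualChunks: {
          // Framework code changes rarely, so this chunk stays cached
          // across deploys while app code churns.
          vendor: ["react", "react-dom", "react-router-dom"],
        },
      },
    },
  },
});
```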
Zero-dependency animations. The site runs scroll-triggered fades, parallax effects, and stagger animations entirely through the IntersectionObserver API and CSS transitions. No Framer Motion, no GSAP, no animation library. Framer Motion alone adds 30-40KB gzipped. For a marketing site where first-load performance directly impacts conversion, that weight isn't justified when native browser APIs do the same job.
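The zero-dependency version of a scroll-triggered fade is short enough to show in full. Class names here are illustrative; the CSS carries the actual transition:

```typescript
// Scroll-triggered fade-ins with no animation library.
// Companion CSS (illustrative):
//   .fade { opacity: 0; transform: translateY(12px);
//           transition: opacity .5s ease, transform .5s ease; }
//   .fade.visible { opacity: 1; transform: none; }

const observer = new IntersectionObserver(
  (entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        entry.target.classList.add("visible");
        observer.unobserve(entry.target); // animate once, then stop watching
      }
    }
  },
  { threshold: 0.15 } // fire when ~15% of the element is in view
);

document.querySelectorAll(".fade").forEach((el) => observer.observe(el));
```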
Lazy-loaded routes ensure the initial bundle only contains the home page. Every other page loads on navigation. Since the vast majority of traffic hits the home page, this keeps the critical path as small as possible.
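With React Router (an assumption for this sketch; any router with route-level splitting works the same way), that is a few lines of `React.lazy`:

```typescript
// App.tsx -- only the home page ships in the initial bundle.
import { lazy, Suspense } from "react";
import { Routes, Route } from "react-router-dom";
import Home from "./pages/Home"; // eager: most traffic lands here

const About = lazy(() => import("./pages/About")); // fetched on navigation
const Blog = lazy(() => import("./pages/Blog"));

export function App() {
  return (
    <Suspense fallback={null}>
      <Routes>
        <Route path="/" element={<Home />} />
        <Route path="/about" element={<About />} />
        <Route path="/blog" element={<Blog />} />
      </Routes>
    </Suspense>
  );
}
```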
Pre-rendered Static HTML for SEO
A React SPA has a fundamental SEO problem: the initial HTML response is an empty shell. Google's crawler can execute JavaScript, but it does so in a separate rendering queue with lower priority. Social media crawlers (LinkedIn, Twitter, Facebook) don't execute JavaScript at all. When someone shares your blog post on LinkedIn, the crawler fetches raw HTML, sees your homepage meta tags, and uses those for the preview.
We solve this with build-time pre-rendering. After Vite outputs the SPA, a script launches a headless browser, navigates to every route, waits for React to render, then captures the full HTML and writes it as a static file.
When Firebase Hosting receives a request, it checks for a matching static file before falling back to the SPA shell. Pre-rendered pages get served directly: full HTML, correct meta tags, structured data, blog content, everything baked in. The JavaScript bundles still load in parallel, and React hydrates the existing DOM rather than re-rendering from scratch.
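The pre-render step itself can be sketched in a short script. This version assumes Puppeteer and a local preview server (e.g. `vite preview`); the route list and port are placeholders:

```typescript
// scripts/prerender.ts -- run after the Vite build, against a preview server.
import { mkdir, writeFile } from "node:fs/promises";
import { dirname, join } from "node:path";
import puppeteer from "puppeteer";

const routes = ["/", "/about", "/blog"];
const origin = "http://localhost:4173";

const browser = await puppeteer.launch();
const page = await browser.newPage();

for (const route of routes) {
  // Wait for the network to go idle so React has finished rendering.
  await page.goto(origin + route, { waitUntil: "networkidle0" });
  const html = await page.content(); // full DOM, meta tags and content baked in
  const out = join("dist", route === "/" ? "index.html" : `${route.slice(1)}/index.html`);
  await mkdir(dirname(out), { recursive: true });
  await writeFile(out, html);
}

await browser.close();
```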
The end result: Google indexes the full content without depending on JavaScript rendering. Social media previews show the correct title, description, and image for each page. And users see content immediately instead of waiting for JavaScript to boot.
The Cost Breakdown
Here's the honest math:
| Service | Free Tier Allocation | Our Usage |
|---|---|---|
| Firebase Hosting | 10GB storage, 360MB/day transfer | Well within limits |
| Cloud Functions | 2M invocations/month | A few hundred/month |
| Firestore | 1GB storage, 50K reads/day | A few thousand reads/day |
| Cloud Storage | 5GB | Minimal file uploads |
| Firebase Auth | Unlimited for email/password | Handful of admin users |
| GitHub Actions | 2,000 min/month free | ~30 min/month |
| Monthly Total | — | $0 |
This works because an agency website generates modest traffic. Thousands of monthly visitors, not millions. Hundreds of form submissions, not millions of API calls. Firebase's free tiers are sized for exactly this kind of usage.
When usage grows past the free tier, Firebase's pay-as-you-go pricing kicks in. But by that point, the product is generating enough revenue to justify the cost. The pricing is predictable and scales linearly with usage. No surprise bills, no reserved instances to manage.
When to Outgrow This Architecture
This setup has clear limits. Recognizing them is as important as understanding the benefits.
You need SSR or edge computing. If your product requires server-side rendering for performance or SEO at scale, you need a framework and platform designed for it. Firebase Hosting serves static files. It doesn't run code at the edge.
Your backend becomes complex. A handful of callable functions and triggers work well for straightforward server-side logic. Once you're looking at background job queues, complex workflows, inter-service communication, or processing pipelines, you need dedicated compute.
Your traffic demands dedicated infrastructure. Firebase's free tier handles thousands of daily visitors comfortably. Hundreds of thousands of daily active users require capacity planning, load testing, and likely a dedicated database. That's a different architectural conversation.
The migration path from Firebase to a more complex setup is well-documented and incremental. You can move your frontend to Vercel while keeping your backend on Firebase. You can migrate Firestore to a managed PostgreSQL instance. These aren't rewrites. They're replacements of individual components.
The mistake most startups make is building for scale they don't have yet. Every dollar spent on infrastructure that isn't needed is a dollar not spent on product, marketing, or hiring. Start with what works. Scale when the numbers demand it.
The Takeaway
The architecture described here took a single day to set up. Five CI/CD workflows. Three applications. Shared types and shared authentication. Zero monthly cost.
The point isn't that Firebase is the answer for every startup. The point is that your infrastructure cost should be proportional to your actual usage, and most startups are dramatically over-provisioned relative to their traffic. If your marketing site gets a few thousand visitors a month and your backend handles a few hundred operations a day, you do not need Kubernetes. You do not need a managed database cluster. You do not need a container orchestration platform.
You need a CDN that serves static files, a database that scales to zero, a way to run server-side logic without a persistent server, and CI/CD that deploys only what changed. The tools that provide this at zero cost already exist. The hard part isn't finding them. It's having the discipline to not over-engineer before the product demands it.
Want to discuss architecture?
We help funded startups make the right technical decisions from day one.