Infrastructure, GCP, Supabase, Pulumi, DevOps

Why We Self-Host Supabase on GCP (And How We Did It)

Supabase is great, but managed hosting isn't always the answer. Here's why we chose to self-host on GCP, and how we built the infrastructure as code with Pulumi.

January 5, 2026
8 min read
Pulore Team

Supabase has become one of our favorite tools for building applications quickly. The combination of PostgreSQL, real-time subscriptions, authentication, and a beautiful Studio interface is hard to beat.

But for a recent client project, we couldn't use Supabase's managed hosting. The requirements were clear: EU data residency, VPN-only access to the database, enterprise-grade PostgreSQL, and full control over the infrastructure.

So we self-hosted it on GCP. Here's what we learned.

The honest tradeoff

Self-hosting gives you control. It also gives you responsibility.

Let's be clear about what you're signing up for. Managed Supabase handles:

  • Database backups and point-in-time recovery
  • Security patches and updates
  • Scaling and high availability
  • Monitoring and alerting
  • SSL certificates and networking

When you self-host, all of that becomes your job. For many projects, especially MVPs and early-stage startups, managed hosting is absolutely the right choice. Ship fast, validate, and don't spend engineering time on infrastructure.

But sometimes managed doesn't work. And that's when self-hosting makes sense.

When self-hosting makes sense

Here's when we recommend going down this path:

1. Data residency requirements

Our client needed all data to stay in Europe, specifically in London (europe-west2). While Supabase does offer EU regions, the client wanted the database in their own GCP project for compliance and audit purposes.

2. Network isolation

The database should never be accessible from the public internet. Period. We needed VPN-only access with Cloud Armor providing an additional layer of protection.

3. Enterprise PostgreSQL features

GCP's Cloud SQL Enterprise Plus tier offers features that are hard to replicate:

  • Data Cache — Automatic caching layer for frequently accessed data
  • Performance-optimized instances — Up to 64 vCPUs and 512GB RAM
  • Query Insights — Built-in query performance monitoring
  • 99.99% SLA — With regional (multi-zone) deployments

4. Cost predictability at scale

At a certain scale, the economics shift. When you're paying for enterprise PostgreSQL anyway, adding Supabase Studio (which is open source) costs virtually nothing extra.

Our architecture

Here's what we built:

┌─────────────────────────────────────────────────────────────────┐
│                         Internet                                 │
└─────────────────────────────────────────────────────────────────┘
                                │
                    ┌───────────┴───────────┐
                    │                       │
              ┌─────▼─────┐          ┌──────▼──────┐
              │    VPN    │          │ Cloud Armor │
              │ (Outline) │          │  (Whitelist)│
              └─────┬─────┘          └──────┬──────┘
                    │                       │
                    │         ┌─────────────▼─────────────┐
                    │         │    Global Load Balancer   │
                    │         │    (SSL Termination)      │
                    │         └─────────────┬─────────────┘
                    │                       │
              ┌─────▼───────────────────────▼─────┐
              │              VPC                   │
              │  ┌────────────────────────────┐   │
              │  │   Managed Instance Group   │   │
              │  │   (Supabase Studio +       │   │
              │  │    postgres-meta)          │   │
              │  └────────────┬───────────────┘   │
              │               │                   │
              │  ┌────────────▼───────────────┐   │
              │  │   Cloud SQL PostgreSQL 18  │   │
              │  │   (Enterprise Plus)        │   │
              │  │   Private IP only          │   │
              │  └────────────────────────────┘   │
              └───────────────────────────────────┘

Key components:

  • VPN (Outline) — Self-hosted on a small VM, provides secure access for developers
  • Cloud Armor — Whitelists only the VPN server IP and GCP health check ranges
  • Global Load Balancer — Handles SSL termination with Google-managed certificates
  • Managed Instance Group — Auto-scaling, auto-healing Supabase Studio containers
  • Cloud SQL Enterprise Plus — PostgreSQL 18 with private networking only

Infrastructure as Code with Pulumi

We chose Pulumi over Terraform for one simple reason: TypeScript.

With Pulumi, we get real programming constructs — loops, conditionals, type safety, IDE autocomplete. No more HCL string interpolation gymnastics.
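
To make that concrete, here's a minimal, hypothetical example of the kind of thing that gets clunky in HCL: one firewall rule per internal service, driven by a plain array (the service names and ports here are illustrative, not our actual rules):

import * as gcp from "@pulumi/gcp";

const vpc = new gcp.compute.Network("demo-vpc", { autoCreateSubnetworks: false });

// One firewall rule per internal service, generated with an ordinary loop
const internalServices = [
  { name: "studio", port: 3000 },
  { name: "postgres-meta", port: 8080 },
];

for (const svc of internalServices) {
  new gcp.compute.Firewall(`allow-${svc.name}`, {
    network: vpc.id,
    direction: "INGRESS",
    sourceRanges: ["10.0.0.0/8"], // internal traffic only
    allows: [{ protocol: "tcp", ports: [String(svc.port)] }],
  });
}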

Here's how we structured the code:

├── components/          # Reusable infrastructure modules
│   ├── database.ts      # Cloud SQL component
│   ├── networking.ts    # VPC, subnets, firewall rules
│   ├── loadbalancer.ts  # Global LB with SSL
│   ├── vpn.ts          # Outline VPN server
│   └── service-account.ts
├── services/            # Application deployments
│   └── supabase-studio/ # Studio + postgres-meta
├── config/              # Environment configuration
│   └── secrets.ts       # Infisical integration
└── index.ts            # Main entry point

Component-based architecture

Each infrastructure component is a Pulumi ComponentResource that encapsulates related resources. Here's a simplified version of our database component:

export const DatabasePresets = {
  /** Development: 4 vCPUs, 32 GB RAM, Single-zone */
  dev: {
    tier: "db-perf-optimized-N-4",
    edition: "ENTERPRISE_PLUS",
    availabilityType: "ZONAL",
    diskSize: 250,
    dataCacheEnabled: true,
    deletionProtection: false,
  },
  /** Production: 8 vCPUs, 64 GB RAM, Multi-zone HA */
  prod: {
    tier: "db-perf-optimized-N-8",
    edition: "ENTERPRISE_PLUS",
    availabilityType: "REGIONAL",
    diskSize: 250,
    dataCacheEnabled: true,
    deletionProtection: true,
  },
};

We define presets for each environment, then use them like this:

const database = new Database("database", {
  name: resourcePrefix,
  region: config.region,
  network: networking.vpc,
  privateVpcConnection: networking.privateVpcConnection,
  databaseName: secrets.apply(s => s.infra.postgresDb),
  password: secrets.apply(s => s.infra.postgresPassword),
  preset: config.environment as DatabasePreset, // "dev" or "prod"
});

The preset provides sensible defaults, but any setting can be overridden. This pattern gives us consistency across environments while allowing flexibility when needed.
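
For context, here's roughly what the component class around those presets looks like. This is a trimmed sketch, not our full implementation: the type token, args interface, and output names are illustrative, and the child database and user resources are elided.

import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";

export interface DatabaseArgs {
  name: pulumi.Input<string>;
  region: string;
  network: gcp.compute.Network;
  privateVpcConnection: gcp.servicenetworking.Connection;
  databaseName: pulumi.Input<string>;
  password: pulumi.Input<string>;
  preset: keyof typeof DatabasePresets;
  overrides?: Partial<(typeof DatabasePresets)["dev"]>;
}

export class Database extends pulumi.ComponentResource {
  public readonly instance: gcp.sql.DatabaseInstance;

  constructor(name: string, args: DatabaseArgs, opts?: pulumi.ComponentResourceOptions) {
    // The custom type token groups all child resources under one node in `pulumi up` output
    super("pulore:infra:Database", name, {}, opts);

    // Preset first, explicit overrides win
    const preset = { ...DatabasePresets[args.preset], ...args.overrides };

    this.instance = new gcp.sql.DatabaseInstance(`${name}-instance`, {
      region: args.region,
      databaseVersion: "POSTGRES_18", // per the setup described above
      deletionProtection: preset.deletionProtection,
      settings: {
        tier: preset.tier,
        edition: preset.edition,
        availabilityType: preset.availabilityType,
        diskSize: preset.diskSize,
        dataCacheConfig: { dataCacheEnabled: preset.dataCacheEnabled },
        ipConfiguration: {
          ipv4Enabled: false,              // private IP only; see the next section
          privateNetwork: args.network.id,
        },
      },
    }, { parent: this, dependsOn: [args.privateVpcConnection] });

    // gcp.sql.Database and gcp.sql.User resources for args.databaseName / args.password
    // are created here as well (omitted for brevity).

    this.registerOutputs({ instance: this.instance });
  }
}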

Private networking

One of our non-negotiables was keeping the database off the public internet. Cloud SQL supports private IP through VPC peering:

// Reserve an IP range for the private connection
this.privateIpRange = new gcp.compute.GlobalAddress(`${resourceName}-private-ip-range`, {
  purpose: "VPC_PEERING",
  addressType: "INTERNAL",
  prefixLength: 16,
  network: this.vpc.id,
});
 
// Create the private connection to Google's service networking
this.privateVpcConnection = new gcp.servicenetworking.Connection(`${resourceName}-private-vpc-connection`, {
  network: this.vpc.id,
  service: "servicenetworking.googleapis.com",
  reservedPeeringRanges: [this.privateIpRange.name],
});

With this in place, the Cloud SQL instance gets a private IP within our VPC. No public IP is ever assigned.
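
Anything downstream that needs to reach Postgres, like the Studio containers, consumes that private address as a Pulumi output. A small sketch (assuming a postgres user whose password comes from the secrets shown later; database and secrets are the variables from the snippets above):

import * as pulumi from "@pulumi/pulumi";

// `database` and `secrets` are the objects from the snippets above.
const dbPassword = secrets.apply(s => s.infra.postgresPassword);
const dbName = secrets.apply(s => s.infra.postgresDb);

// Marked as a secret so the full connection string is encrypted in Pulumi state.
export const databaseUrl = pulumi.secret(
  pulumi.interpolate`postgresql://postgres:${dbPassword}@${database.instance.privateIpAddress}:5432/${dbName}`,
);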

VPN for secure access

We deployed Outline VPN (from the Jigsaw team at Google) on an e2-small instance. It's lightweight, easy to manage, and works great with the Outline client apps.

The clever bit is the Cloud Armor integration. We whitelist only the VPN server's public IP:

securityPolicy = new gcp.compute.SecurityPolicy(`${resourceName}-security-policy`, {
  rules: [
    // Default: deny all
    {
      action: "deny(403)",
      priority: 2147483647,
      match: {
        versionedExpr: "SRC_IPS_V1",
        config: { srcIpRanges: ["*"] },
      },
    },
    // Allow GCP health checks
    {
      action: "allow",
      priority: 1000,
      match: {
        versionedExpr: "SRC_IPS_V1",
        config: {
          srcIpRanges: ["35.191.0.0/16", "130.211.0.0/22"],
        },
      },
    },
    // Allow VPN server
    {
      action: "allow",
      priority: 900,
      match: {
        versionedExpr: "SRC_IPS_V1",
        config: {
          srcIpRanges: [pulumi.interpolate`${vpnPublicIp}/32`],
        },
      },
    },
  ],
});

Now Supabase Studio is only accessible to team members connected to the VPN. Even if someone finds the URL, they can't access it without VPN credentials.
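
The policy only takes effect once it's attached to the load balancer's backend service. Roughly like this (a sketch: studioHealthCheck and studioMig stand in for the health check and managed instance group our Studio service creates):

const backend = new gcp.compute.BackendService(`${resourceName}-studio-backend`, {
  protocol: "HTTP",
  portName: "http",
  timeoutSec: 30,
  healthChecks: studioHealthCheck.id,  // probes come from the ranges allowed above
  securityPolicy: securityPolicy.id,   // attach the Cloud Armor policy
  backends: [{ group: studioMig.instanceGroup }],
});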

Secret management with Infisical

We use Infisical for secret management. It's like Doppler or HashiCorp Vault, but with a cleaner developer experience.

Secrets are fetched at deployment time and injected into the infrastructure:

export function fetchSecrets(
  clientId: pulumi.Output<string>,
  clientSecret: pulumi.Output<string>,
  environment: string,
): pulumi.Output<AllSecrets> {
  return pulumi.all([clientId, clientSecret]).apply(async ([id, secret]) => {
    const client = new InfisicalSDK({
      siteUrl: "https://eu.infisical.com",
    });
 
    await client.auth().universalAuth.login({
      clientId: id,
      clientSecret: secret,
    });
 
    const infraSecrets = await client.secrets().listSecrets({
      projectId: INFRA_PROJECT_ID,
      environment: environment,
      secretPath: "/",
    });
 
    // Return structured secrets
    return {
      infra: {
        postgresDb: getSecretValue(infraSecrets, "POSTGRES_DB"),
        postgresPassword: getSecretValue(infraSecrets, "POSTGRES_PASSWORD"),
      },
      // ... more secrets
    };
  });
}

This keeps secrets out of our Git repository and Pulumi config files while still being fully automated.
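
In index.ts, the wiring looks roughly like this. The Pulumi config key names are illustrative; the only requirement is that the Infisical machine-identity credentials are stored as encrypted config values:

import * as pulumi from "@pulumi/pulumi";
import { fetchSecrets } from "./config/secrets";

const cfg = new pulumi.Config();

// Set once per stack with `pulumi config set --secret infisicalClientId ...`
const secrets = fetchSecrets(
  cfg.requireSecret("infisicalClientId"),
  cfg.requireSecret("infisicalClientSecret"),
  pulumi.getStack(), // "dev" or "prod"
);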

Environment-specific deployments

We run two environments: dev and prod. Same code, different configurations:

Setting               Dev              Prod
Database tier         4 vCPU / 32 GB   8 vCPU / 64 GB
Availability          Single-zone      Multi-zone (HA)
Min instances         1                2
Deletion protection   Off              On
Subnet CIDR           10.0.1.0/24      10.1.1.0/24

Switching environments is as simple as:

pulumi stack select dev
pulumi up
 
# or
 
pulumi stack select prod
pulumi up
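
Under the hood, the stack name drives the environment-specific values (the config.region and config.environment used in earlier snippets). A simplified sketch of that config module, with illustrative keys:

import * as pulumi from "@pulumi/pulumi";

const stack = pulumi.getStack(); // "dev" or "prod"
const gcpConfig = new pulumi.Config("gcp");

export const config = {
  environment: stack,
  region: gcpConfig.require("region"), // e.g. europe-west2, set per stack
  subnetCidr: stack === "prod" ? "10.1.1.0/24" : "10.0.1.0/24",
  minInstances: stack === "prod" ? 2 : 1,
};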

What we haven't covered (yet)

This post focused on the core architecture. There's more to come:

Multi-region read replicas

For our next iteration, we're adding read replicas in the US and Australia. Cloud SQL supports cross-region replicas, and we'll integrate them with Prisma's read replica support for automatic query routing.
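
To give a flavour of the application side, here's a sketch of what that integration might look like once the replicas exist, using Prisma's read replicas extension (the environment variable names are hypothetical):

import { PrismaClient } from "@prisma/client";
import { readReplicas } from "@prisma/extension-read-replicas";

// Read queries are routed to a replica; writes, transactions, and $primary() go to the primary.
const prisma = new PrismaClient().$extends(
  readReplicas({
    url: [
      process.env.DATABASE_URL_REPLICA_US!, // hypothetical replica connection strings
      process.env.DATABASE_URL_REPLICA_AU!,
    ],
  }),
);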

Cost breakdown

We'll share a detailed cost analysis comparing self-hosted vs. managed Supabase at various scales.

Monitoring and alerting

We're using GCP's built-in monitoring, but there's a lot more we could say about setting up proper observability.

Lessons learned

1. Start with presets

Defining environment presets (dev/prod) upfront saved us countless hours. Every new resource follows the same pattern: sensible defaults with override capability.

2. VPN is worth the complexity

Yes, it's another component to maintain. But for internal tools, the security benefits outweigh the overhead. Plus, Outline is remarkably low-maintenance.

3. Test startup scripts locally

Our Container-Optimized OS instances run startup scripts that pull and configure Docker containers. Testing these locally (using Docker and a mock environment) caught issues that would have been painful to debug on live instances.

4. Private networking takes time

The VPC peering for Cloud SQL private IP can take 5-10 minutes to provision. Plan for this in your deployment scripts and don't be alarmed when pulumi up seems to hang.
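
If you'd rather have a stuck peering fail loudly than hang, Pulumi's customTimeouts resource option helps. A sketch against the connection resource from earlier:

new gcp.servicenetworking.Connection(`${resourceName}-private-vpc-connection`, {
  network: this.vpc.id,
  service: "servicenetworking.googleapis.com",
  reservedPeeringRanges: [this.privateIpRange.name],
}, {
  customTimeouts: { create: "20m" }, // peering can take 5-10 minutes; give it headroom, then fail
});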

5. Cloud Armor needs health check IPs

This caught us initially — we denied all traffic including GCP's health check probes. Make sure you whitelist 35.191.0.0/16 and 130.211.0.0/22.

When to go this route

Self-hosting Supabase on GCP makes sense when you:

  • Need data residency or compliance controls
  • Want enterprise PostgreSQL features (Data Cache, high SLA)
  • Require VPN-only or IP-restricted access
  • Have the engineering capacity to manage infrastructure
  • Are at a scale where managed pricing becomes significant

For everyone else, especially early-stage projects, stick with managed Supabase. It's excellent, and your time is better spent on your product.


Building infrastructure for a project with specific compliance or security requirements? Let's talk — we can help you design and deploy a setup that fits your needs.
