Advanced Supabase: RLS, Database Triggers, and Edge Functions
Deeper Supabase: write real Row Level Security policies, set up database triggers that fire n8n workflows, and deploy your first Edge Function.
If you’ve worked through the basics, you know how to create tables, run queries, and connect Supabase to n8n. This article covers the layer underneath — the features that make Supabase a genuinely serious data platform rather than just a convenient place to store rows.
These are the things I had to figure out the hard way while building SendJob: proper RLS policies for user-facing data, relational schemas with foreign keys, database functions I could call from n8n, and most powerfully, database webhooks that push changes directly into my automation workflows without any polling.
This is also where Supabase earns its place in the stack versus a simpler tool.
Row Level Security Deep Dive
The basics article told you to use the service role key in n8n to bypass RLS for server-side automations. That’s still correct. But RLS matters the moment you have any kind of user-facing interface — a customer portal, a technician mobile app, a client dashboard — where different users should only see their own data.
The four policy types:
RLS policies are written per operation. You write separate policies for SELECT, INSERT, UPDATE, and DELETE. A user can only perform an operation if a matching policy exists and evaluates to true for their row.
In Supabase, you write policies in the Table Editor under the “Auth Policies” tab, or directly in the SQL Editor. SQL is more reliable for complex policies.
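One prerequisite worth stating explicitly: policies only apply once RLS is enabled on the table. Tables created through the Supabase dashboard usually have it on by default, but tables created via raw SQL may not, so enable it explicitly:

```sql
-- Enable RLS; with it off, anyone holding the anon key can read the table
ALTER TABLE customers ENABLE ROW LEVEL SECURITY;

-- Once enabled, requests through the anon or authenticated roles
-- return no rows until a matching policy grants access
```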
The auth.uid() function:
When a user makes a request with a JWT issued by Supabase Auth, auth.uid() returns their user ID. This is the key to row-level access control.
Here’s a real policy: “A customer can only read their own records.”
CREATE POLICY "Customers can view own record"
ON customers
FOR SELECT
USING (auth.uid() = user_id);
This assumes your customers table has a user_id column that references the Supabase Auth user. When a logged-in user queries the customers table, this policy filters results to only rows where user_id matches their authenticated user ID. Any rows belonging to other users are invisible — not forbidden with an error, just absent.
INSERT policies use WITH CHECK instead of USING:
CREATE POLICY "Users can insert their own records"
ON customers
FOR INSERT
WITH CHECK (auth.uid() = user_id);
USING controls which existing rows you can see or modify. WITH CHECK controls what new rows you’re allowed to create. For INSERT, you can’t use USING — there’s no existing row to evaluate against.
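UPDATE is the one operation where both clauses typically appear together: USING decides which rows the user may target, WITH CHECK validates the row as it will look after the change. A sketch mirroring the policies above:

```sql
-- USING: you can only target your own rows
-- WITH CHECK: the updated row must still belong to you
-- (prevents reassigning a record to another user's user_id)
CREATE POLICY "Users can update own records"
ON customers
FOR UPDATE
USING (auth.uid() = user_id)
WITH CHECK (auth.uid() = user_id);
```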
When service role bypasses RLS — and when that’s dangerous:
The service role key bypasses all RLS policies. That’s exactly what you want for n8n automations running server-side logic where you intentionally need to access any record. It’s dangerous if the service role key leaks into client-side code — a browser or mobile app — where users could potentially use it directly. Keep the service role key strictly in n8n and any other backend services. Never expose it to the client side.
Foreign Keys and Relationships
A flat table is fine for simple data. A real application has relational data, and that means foreign keys.
In SendJob, every job belongs to a customer. The jobs table has a customer_id column that references the id column of the customers table. This is a foreign key relationship.
Here’s the SQL to create it:
CREATE TABLE jobs (
id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
customer_id uuid NOT NULL REFERENCES customers(id) ON DELETE CASCADE,
status text NOT NULL DEFAULT 'pending',
address text,
scheduled_at timestamptz,
created_at timestamptz NOT NULL DEFAULT now()
);
The key part: REFERENCES customers(id). This tells PostgreSQL that every value in customer_id must correspond to an existing id in the customers table. If you try to insert a job with a customer_id that doesn’t exist in customers, the database rejects it. Data integrity is enforced at the database level, not just in your application code.
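You can see the constraint in action with a deliberately bad insert. The UUID here is a made-up value that does not exist in customers:

```sql
-- Fails because no row in customers has this id
INSERT INTO jobs (customer_id, address)
VALUES ('00000000-0000-0000-0000-000000000000', '123 Main St');
-- ERROR: insert or update on table "jobs" violates foreign key constraint
```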
ON DELETE CASCADE:
This means if you delete a customer, all their jobs are automatically deleted too. The alternatives are ON DELETE RESTRICT and the default, NO ACTION, both of which prevent you from deleting a customer while they still have jobs (NO ACTION just checks at the end of the statement rather than immediately). Which behavior you want depends on your application logic — CASCADE is convenient, but make sure you genuinely want child rows destroyed along with the parent.
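For reference, the other common options look like this in the column definition. These are fragments of a CREATE TABLE statement, not standalone commands:

```sql
-- Detach jobs from a deleted customer instead of deleting them
-- (requires customer_id to be nullable)
customer_id uuid REFERENCES customers(id) ON DELETE SET NULL

-- Explicitly refuse the delete while jobs still reference the customer
customer_id uuid REFERENCES customers(id) ON DELETE RESTRICT
```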
Querying related tables with JOIN:
To get a job along with the customer’s name and email:
SELECT
jobs.id,
jobs.status,
jobs.address,
customers.name,
customers.email
FROM jobs
JOIN customers ON jobs.customer_id = customers.id
WHERE jobs.status = 'open';
When you run a query like this from n8n, the joined fields come back flattened into a single object per row, all available as properties on $json. One caveat: the Supabase REST API doesn't accept raw SQL, so to run this from n8n you'd typically use the Postgres node pointed at your Supabase connection string, or wrap the query in a database function and call it via RPC (covered next).
Supabase also supports PostgREST's embedded resource syntax for fetching related data through the REST API without writing raw SQL, which works well with a plain HTTP Request node.
Database Functions
PostgreSQL lets you write functions — stored procedures that run inside the database. These are useful when you want to encapsulate logic that operates close to the data, or when you want to call a specific computation from n8n via a single RPC call.
Here’s a function that counts open jobs for a given technician:
CREATE OR REPLACE FUNCTION get_open_job_count(tech_id uuid)
RETURNS integer
LANGUAGE sql
AS $$
SELECT COUNT(*)::integer
FROM jobs
WHERE technician_id = tech_id
AND status = 'open';
$$;
To call this from n8n, POST to Supabase's RPC endpoint: behind the scenes this is POST /rest/v1/rpc/get_open_job_count. An HTTP Request node with your apikey and Authorization headers and a JSON body like {"tech_id": "..."} is the most reliable way to do it.
This is cleaner than writing a complex query inline in n8n every time you need that count. The logic lives in the database, stays consistent across any tool that calls it, and is easier to update in one place.
Functions can be much more complex than this — they can modify data, call other functions, raise exceptions. But start simple. The pattern is the same regardless of complexity.
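As a sketch of that next step, here's a hypothetical plpgsql function that both modifies data and raises an exception. complete_job is an illustration, not part of the schema above:

```sql
-- Marks a job complete; fails loudly if the job doesn't exist
CREATE OR REPLACE FUNCTION complete_job(job_id uuid)
RETURNS void
LANGUAGE plpgsql
AS $$
BEGIN
  UPDATE jobs
  SET status = 'complete'
  WHERE id = job_id;

  -- FOUND is set by the UPDATE: false means no row matched
  IF NOT FOUND THEN
    RAISE EXCEPTION 'No job with id %', job_id;
  END IF;
END;
$$;
```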
Database Webhooks: Pushing Changes to n8n
This is the feature that changed how I think about automation architecture, and it’s one of the most underused capabilities in Supabase.
By default, n8n has to poll Supabase to find out if anything has changed. You set up a schedule trigger that runs every minute, queries for new records, and processes them. It works, but it’s inefficient and introduces latency.
With Supabase Database Webhooks, the database pushes to n8n. When a row is inserted, updated, or deleted, Supabase immediately fires a POST request to a URL you specify — your n8n webhook.
Setting it up:
In Supabase, go to Database → Webhooks → Create a new webhook.
Configure:
- Name — something descriptive, like job_status_changed
- Table — jobs
- Events — select "UPDATE" (or INSERT, DELETE, or all three)
- Webhook URL — your n8n webhook URL (from a Webhook trigger node in n8n)
- HTTP method — POST
Save it. Now create a matching Webhook trigger node in n8n set to listen at that URL.
When any job’s status is updated in Supabase, Supabase immediately POSTs a payload to n8n containing the old row values, the new row values, and the table name. Your n8n workflow wakes up, examines the change, and takes action — sending an SMS, updating another record, whatever the logic requires.
The payload looks like this:
{
"type": "UPDATE",
"table": "jobs",
"record": { "id": "...", "status": "enroute", "customer_id": "..." },
"old_record": { "id": "...", "status": "pending", "customer_id": "..." }
}
In n8n, you can compare record.status vs old_record.status to know exactly what changed and respond accordingly. A job moving from pending to enroute triggers an ETA SMS. A job moving from enroute to onsite triggers an arrival notification. The logic stays clean in n8n. The trigger fires reliably from the database.
This pattern eliminates polling entirely for event-driven workflows.
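Under the hood, a Database Webhook is just a PostgreSQL trigger that Supabase generates for you, calling its pg_net-based HTTP helper. You rarely need to write it yourself, but the generated SQL looks roughly like this — treat the helper name and argument order as a sketch, and the URL as a hypothetical placeholder:

```sql
-- What the dashboard creates for you behind the scenes (approximate)
CREATE TRIGGER job_status_changed
AFTER UPDATE ON jobs
FOR EACH ROW
EXECUTE FUNCTION supabase_functions.http_request(
  'https://your-n8n-instance/webhook/job-status',  -- your n8n webhook URL
  'POST',
  '{"Content-Type":"application/json"}',
  '{}',
  '1000'  -- timeout in milliseconds
);
```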
Edge Functions
Supabase Edge Functions are serverless TypeScript functions that run on Supabase’s edge infrastructure. They’re deployed with the Supabase CLI and run on the Deno runtime.
When to use them:
- When you need server-side logic that’s tightly coupled to your database — validation, transformation before insert, calling other APIs based on database state
- When n8n is not the right tool — maybe the operation needs to complete synchronously as part of a database transaction, or you need sub-millisecond response time
- When you want to expose a custom API endpoint for a specific operation
A practical example: validating and sanitizing form data before it hits your database.
// supabase/functions/create-customer/index.ts
import { serve } from "https://deno.land/std@0.168.0/http/server.ts"
import { createClient } from "https://esm.sh/@supabase/supabase-js@2"
serve(async (req) => {
const { name, email, phone } = await req.json()
// Validate
if (!name || !email) {
return new Response(
JSON.stringify({ error: "name and email are required" }),
{ status: 400, headers: { "Content-Type": "application/json" } }
)
}
// Sanitize phone to E.164 format (assumes US numbers: keep last 10 digits, prefix +1)
const cleanPhone = phone
? '+1' + phone.replace(/\D/g, '').slice(-10)
: null
const supabase = createClient(
Deno.env.get("SUPABASE_URL") ?? "",
Deno.env.get("SUPABASE_SERVICE_ROLE_KEY") ?? ""
)
const { data, error } = await supabase
.from('customers')
.insert({ name, email, phone: cleanPhone })
.select()
.single()
if (error) {
return new Response(JSON.stringify({ error: error.message }), { status: 500 })
}
return new Response(JSON.stringify(data), { status: 201 })
})
Deploy with: supabase functions deploy create-customer
The function runs at https://&lt;your-project-ref&gt;.supabase.co/functions/v1/create-customer. You can call it from n8n, from a form, or from anywhere that can make an HTTP request.
Edge Functions have access to your Supabase database through the standard JS client and can use environment variables you set in the Supabase dashboard. They are not a replacement for n8n — they’re a complement for specific use cases where you need logic that runs before or alongside database operations.
Realtime Subscriptions
Supabase can broadcast database changes to connected clients in real time using WebSockets. When a row changes, any client subscribed to that table receives the update instantly — no polling, no refresh.
The primary use case for this is front-end interfaces:
- A live job dispatch board that updates as technicians change job statuses
- An admin panel showing real-time fleet location
- A customer portal that shows live job progress
Here’s a simple JavaScript example:
const supabase = createClient(url, anonKey)
const subscription = supabase
.channel('jobs-changes')
.on(
'postgres_changes',
{ event: 'UPDATE', schema: 'public', table: 'jobs' },
(payload) => {
console.log('Job updated:', payload.new)
// Update your UI here
}
)
.subscribe()
Realtime is more front-end-focused than the rest of what we’ve covered. But it’s worth knowing it exists. If you ever build a live dashboard — and most field service operations benefit enormously from one — realtime subscriptions are the mechanism.
Storage
Supabase includes S3-compatible file storage. You create “buckets” (containers for files), define access policies, and upload files through the API or Supabase’s client library.
In SendJob, I use storage for:
- Job site photos uploaded by technicians
- Signed service agreements
- Invoice PDFs
Creating a bucket in Supabase is a few clicks. Access policies work similarly to RLS — you can make a bucket public (any URL works for reading files) or private (require authentication). For most business applications, private buckets with signed URLs are appropriate.
When you generate a signed URL, it’s valid for a time window you specify — useful for sending customers a link to their invoice that expires after 24 hours.
Database Indexing
This is the performance topic most builders ignore until their queries get slow in production. Here’s what you actually need to know.
An index is a data structure that lets PostgreSQL find rows matching a WHERE clause without scanning every row in the table. For small tables, it doesn’t matter. For tables with tens of thousands of rows and queries running multiple times per second, indexes are the difference between a fast application and an unusable one.
Two indexes every production database needs:
- Primary key index — PostgreSQL creates this automatically. Every id column is indexed.
- Indexes on frequently filtered columns — any column you regularly use in a WHERE clause.
In SendJob, I added an index on jobs.status because the most common query pattern is WHERE status = 'open'. Without it, every query scans the entire jobs table. With it, PostgreSQL jumps directly to the matching rows.
CREATE INDEX idx_jobs_status ON jobs(status);
Also worth indexing: customer_id on the jobs table (since you’re constantly joining or filtering by it), and any column you use in ORDER BY clauses on large result sets.
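Two further variants worth knowing, sketched against the jobs table above:

```sql
-- Composite index: serves "this customer's jobs, filtered by status"
-- in one index lookup
CREATE INDEX idx_jobs_customer_status ON jobs(customer_id, status);

-- Partial index: only covers open jobs, so it stays small and fast
-- when most rows in the table are historical
CREATE INDEX idx_open_jobs_scheduled ON jobs(scheduled_at)
WHERE status = 'open';
```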
How to tell if you need an index:
If a query feels slow, prefix it with EXPLAIN ANALYZE in the SQL Editor:
EXPLAIN ANALYZE
SELECT * FROM jobs WHERE status = 'open';
Look for “Seq Scan” in the output — that means a full table scan, which is what indexes prevent. After adding an index, run it again and look for “Index Scan” instead.
Don’t over-index. Every index you add slightly slows down writes (INSERT, UPDATE, DELETE) because PostgreSQL has to maintain the index alongside the data. Index the columns you filter on frequently. Leave the rest alone.
Next in the stack: What Is Twilio? → — adding SMS to your automations.