Build It: A Job Tracking Database for a Service Business
A complete walkthrough: design and build the Supabase schema that powers a real field service operation — customers, jobs, technicians, foreign keys, indexes, and database webhooks that trigger n8n automations.
This is the build. No more theory. We’re going to design and wire up the exact database schema that underpins a service business automation system — the same pattern used in SendJob.
By the end of this walkthrough, you’ll have a working Supabase database with three related tables, proper indexes, and a database webhook that fires an n8n workflow the moment a job status changes. The n8n workflow will read the job and customer data together in a single query and send an SMS to the customer using Twilio.
If you don’t have Twilio set up yet, you can follow everything up to that step and substitute a console log or email notification. The database and webhook portions are the main event.
What We’re Building and Why
A service business — HVAC, plumbing, electrical, pest control — has a predictable data model. Every operation involves:
- A customer with a name, email, and phone number
- A job assigned to that customer, with a status, an address, and a scheduled time
- A technician who is assigned to the job and goes to the site
The automation layer watches for changes to job status. When a job moves from pending to enroute, the customer gets a text. When it moves to onsite, they get another. When it moves to complete, a payment link goes out.
None of that automation works without a clean, relational database underneath it. That’s what we’re building today.
Prerequisites
Before you start, make sure you have:
- Supabase project — free tier is fine. If you don’t have one, go to supabase.com, sign up, and create a new project. Wait for it to provision (about 60 seconds).
- n8n account — cloud at n8n.io or self-hosted. You’ll need an active instance to receive the database webhook.
- Twilio account (optional) — only needed for the SMS step at the end. You can complete everything else without it.
You should already be comfortable with the Supabase dashboard. If the SQL Editor and Table Editor feel unfamiliar, work through the Basics guide first.
Step 1: Design the Schema Before You Build It
This is the step most builders skip, and it’s why they end up rebuilding their database three times.
Before touching the Supabase dashboard, draw out what the data looks like. Here’s ours:
customers
- id — uuid, primary key
- name — text, not null
- email — text, not null, unique
- phone — text, nullable
- created_at — timestamptz, default now()
technicians
- id — uuid, primary key
- name — text, not null
- phone — text, nullable
- active — boolean, default true
- created_at — timestamptz, default now()
jobs
- id — uuid, primary key
- customer_id — uuid, foreign key → customers(id)
- technician_id — uuid, foreign key → technicians(id), nullable (not assigned yet when created)
- status — text, not null, default 'pending'
- address — text, not null
- scheduled_at — timestamptz, nullable
- notes — text, nullable
- created_at — timestamptz, default now()
- updated_at — timestamptz, default now()
The relationships:
- Each job belongs to one customer (many jobs per customer)
- Each job can be assigned to one technician (many jobs per technician)
- A technician or customer can exist with no jobs — that’s fine
The status column on jobs is the control column. Every automation in the system watches it. Valid values: pending, assigned, enroute, onsite, complete, cancelled.
The updated_at column will be automatically maintained by a database trigger we’ll add in Step 3. Whenever a job row changes, updated_at reflects the moment it changed. This is essential for debugging — you’ll always know when a record was last touched.
Step 2: Create the Tables
Go to your Supabase dashboard and open the SQL Editor. We’ll create all three tables with a single script.
Copy and paste this entire block and run it:
-- Customers table
CREATE TABLE customers (
id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
name text NOT NULL,
email text NOT NULL UNIQUE,
phone text,
created_at timestamptz NOT NULL DEFAULT now()
);
-- Technicians table
CREATE TABLE technicians (
id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
name text NOT NULL,
phone text,
active boolean NOT NULL DEFAULT true,
created_at timestamptz NOT NULL DEFAULT now()
);
-- Jobs table
CREATE TABLE jobs (
id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
customer_id uuid NOT NULL REFERENCES customers(id) ON DELETE CASCADE,
technician_id uuid REFERENCES technicians(id) ON DELETE SET NULL,
status text NOT NULL DEFAULT 'pending'
CHECK (status IN ('pending','assigned','enroute','onsite','complete','cancelled')),
address text NOT NULL,
scheduled_at timestamptz,
notes text,
created_at timestamptz NOT NULL DEFAULT now(),
updated_at timestamptz NOT NULL DEFAULT now()
);
A few things worth noticing:
The CHECK constraint on status: CHECK (status IN ('pending','assigned','enroute','onsite','complete','cancelled')) means the database will reject any insert or update that tries to set status to something outside this list. This is data integrity enforced at the database level — your automation can’t accidentally set a job to "in-progress" or "done" or any other variant. The valid states are defined once, enforced everywhere.
ON DELETE CASCADE vs ON DELETE SET NULL: When you delete a customer, their jobs are deleted too (CASCADE). When you delete a technician, their jobs still exist but technician_id is set to NULL (SET NULL) — the jobs don’t disappear just because a technician left.
Foreign key on customer_id is NOT NULL: A job must always belong to a customer. You can’t create a job in the database without a valid customer_id.
Foreign key on technician_id IS nullable: A job can be created before a technician is assigned. The job starts as pending with no technician. The dispatcher assigns one later.
Go to the Table Editor and confirm all three tables appear. Click into each one and verify the columns look right.
Step 3: Add the updated_at Trigger
PostgreSQL doesn’t automatically update updated_at when a row changes. You have to write a trigger function that does it.
Back in the SQL Editor, run this:
-- Function to update the updated_at column
CREATE OR REPLACE FUNCTION set_updated_at()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at = now();
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Attach the trigger to the jobs table
CREATE TRIGGER jobs_set_updated_at
BEFORE UPDATE ON jobs
FOR EACH ROW
EXECUTE FUNCTION set_updated_at();
What this does: every time a job row is updated, before the update is written, the set_updated_at function fires and sets updated_at to the current timestamp. The NEW keyword refers to the new version of the row that’s about to be written.
Test it to make sure it works. First, insert a test job (you’ll need a customer first):
-- Insert a test customer
INSERT INTO customers (name, email, phone)
VALUES ('Jane Smith', 'jane@example.com', '+15551234567')
RETURNING id;
Copy the returned UUID. Then insert a job using that ID:
-- Replace the customer_id with the UUID from above
INSERT INTO jobs (customer_id, address, status)
VALUES ('paste-uuid-here', '123 Main St, Orlando FL', 'pending')
RETURNING id, status, created_at, updated_at;
Note the created_at and updated_at — they should be identical right now. Now update the job status:
UPDATE jobs SET status = 'assigned' WHERE status = 'pending';
Run this query to check the timestamps:
SELECT id, status, created_at, updated_at FROM jobs;
updated_at should now be later than created_at. The trigger is working.
Step 4: Add Indexes for Performance
Your tables are small right now, so queries are instant regardless. But you’re building for production, and queries need to stay fast as the data grows.
Add indexes on the columns you’ll filter and join on most often:
-- Filter jobs by status (the most common query in a dispatch system)
CREATE INDEX idx_jobs_status ON jobs(status);
-- Join or filter jobs by customer
CREATE INDEX idx_jobs_customer_id ON jobs(customer_id);
-- Join or filter jobs by technician
CREATE INDEX idx_jobs_technician_id ON jobs(technician_id);
-- Filter technicians by active status
CREATE INDEX idx_technicians_active ON technicians(active);
That’s it. The id columns are already indexed (primary key). These four indexes cover the query patterns that will dominate this system.
To verify the indexes exist, run:
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename IN ('jobs', 'customers', 'technicians')
ORDER BY tablename, indexname;
You should see your four new indexes plus the primary key indexes Supabase created automatically.
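Once the jobs table has some real volume, you can check that the planner actually uses the status index. A minimal sketch, assuming the idx_jobs_status index from above — note that on a tiny table Postgres may still choose a sequential scan, which is normal and not a sign the index is broken:

```sql
-- Ask the planner how it would run the most common dispatch query.
-- With enough rows, expect an Index Scan (or Bitmap Index Scan) on idx_jobs_status.
EXPLAIN SELECT * FROM jobs WHERE status = 'enroute';
```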
Step 5: Seed Some Test Data
Before wiring up the automation, put realistic test data in so you have something to work with.
-- Add a technician
INSERT INTO technicians (name, phone)
VALUES ('Mike Johnson', '+15559876543')
RETURNING id;
Copy the technician UUID. Then add a second customer and a job assigned to Mike:
-- Add another customer
INSERT INTO customers (name, email, phone)
VALUES ('Robert Garcia', 'robert@example.com', '+15553334444')
RETURNING id;
Copy that customer UUID. Now create a job with both:
-- Create an assigned job (replace both UUIDs)
INSERT INTO jobs (customer_id, technician_id, address, status, scheduled_at, notes)
VALUES (
'customer-uuid-here',
'technician-uuid-here',
'456 Oak Ave, Tampa FL',
'assigned',
now() + interval '2 hours',
'Annual AC maintenance. Customer prefers a call 30 minutes before arrival.'
)
RETURNING *;
Now run a JOIN query to see the full picture — job data, customer data, and technician data in one result:
SELECT
j.id AS job_id,
j.status,
j.address,
j.scheduled_at,
c.name AS customer_name,
c.phone AS customer_phone,
c.email AS customer_email,
t.name AS technician_name,
t.phone AS technician_phone
FROM jobs j
JOIN customers c ON j.customer_id = c.id
LEFT JOIN technicians t ON j.technician_id = t.id
WHERE j.status != 'complete'
ORDER BY j.scheduled_at ASC;
This is the query your n8n workflow will run when it needs to pull all the context for a job. One query, all the data, no extra round-trips to separate tables.
Note the LEFT JOIN on technicians — LEFT JOIN returns the job row even if technician_id is NULL (unassigned jobs). A regular JOIN would exclude unassigned jobs entirely.
Step 6: Set Up the n8n Webhook
Before wiring up the Supabase database webhook, you need an n8n workflow ready to receive it.
In n8n:
- Create a new workflow and name it something like “Job Status Changed”
- Add a Webhook trigger node
- Set HTTP Method to POST
- Set Path to job-status-changed
- Set Response Mode to “Immediately”
- Click “Listen for Test Event” to open the test listener
Copy the test webhook URL — it looks like:
https://your-n8n-instance.n8n.cloud/webhook-test/job-status-changed
Keep n8n open. We’ll come back to build out the rest of the workflow after the database webhook fires its first test payload.
Step 7: Create the Supabase Database Webhook
In your Supabase dashboard, go to Database → Webhooks (sometimes listed under “Database” in the left sidebar).
Click “Create a new webhook.” Configure it:
- Name: job_status_changed
- Table: jobs
- Events: Check “UPDATE” only (we only want to fire when a job changes, not when one is created or deleted)
- Webhook URL: paste your n8n test webhook URL
- HTTP Method: POST
- HTTP Headers: leave empty for now
Save the webhook.
Now go back to Supabase and update a job status in the SQL Editor to trigger it:
UPDATE jobs SET status = 'enroute' WHERE status = 'assigned';
Switch back to n8n. The Webhook node should have received a payload. Click on the node to see the output. It will look like this:
{
"type": "UPDATE",
"table": "jobs",
"schema": "public",
"record": {
"id": "...",
"customer_id": "...",
"technician_id": "...",
"status": "enroute",
"address": "456 Oak Ave, Tampa FL",
"scheduled_at": "...",
"notes": "...",
"created_at": "...",
"updated_at": "..."
},
"old_record": {
"id": "...",
"customer_id": "...",
"technician_id": "...",
"status": "assigned",
...
}
}
You now have both the new state and the old state of the job. This is what makes database webhooks so powerful — you can detect exactly what changed, not just that something changed.
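One practical consequence: a job row can be updated for reasons other than a status change (edited notes, a rescheduled time), and the webhook fires for all of them. A minimal sketch of how you might filter those out in an n8n Code node, assuming the payload shape shown above — the function name and return shape are illustrative, not an n8n or Supabase API:

```javascript
// Given a Supabase UPDATE webhook payload, report whether the status
// column actually changed, and if so, what the transition was.
function statusTransition(payload) {
  const before = payload.old_record?.status;
  const after = payload.record?.status;
  if (before === after) return null; // some other column changed — skip
  return { from: before, to: after };
}
```

A transition object like `{ from: 'assigned', to: 'enroute' }` is then easy to branch on downstream, and a `null` result lets the workflow end quietly for irrelevant updates.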
Step 8: Build the n8n Workflow Logic
Now build out the workflow that responds to the status change.
Add an IF node to filter by status:
Connect an IF node to the Webhook trigger. The condition:
- Value 1: {{ $json.body.record.status }}
- Operation: Equals
- Value 2: enroute
This makes the workflow only take action when a job moves specifically to enroute. All other status updates pass through without triggering anything. You’ll duplicate this pattern for other statuses as you build out the full system.
Add a Supabase node to get full customer and technician data:
Connect a Supabase node to the “true” branch of the IF node.
- Operation: Execute Query
- Credential: your Supabase credential (set up in the Basics guide)
- Query:
SELECT
j.id AS job_id,
j.address,
j.notes,
c.name AS customer_name,
c.phone AS customer_phone,
t.name AS technician_name
FROM jobs j
JOIN customers c ON j.customer_id = c.id
LEFT JOIN technicians t ON j.technician_id = t.id
WHERE j.id = '{{ $('Webhook').item.json.body.record.id }}'
LIMIT 1;
This query pulls the full context for the specific job that just changed — using the job ID from the webhook payload. You now have the customer’s phone number, the technician’s name, and the job address, all in one object.
Add a Twilio node (or substitute with a log):
If you have Twilio configured, connect a Twilio node:
- Operation: Send SMS
- From: your Twilio number
- To: {{ $json.customer_phone }}
- Message:
Hi {{ $json.customer_name }}, your technician {{ $json.technician_name }} is on their way to {{ $json.address }}. You'll receive a message when they arrive.
If you’re not ready for Twilio yet, substitute a “No Operation” node or a simple email via Resend. The database and webhook portions are the same either way.
Step 9: Activate and Test End to End
Switch the Supabase webhook to the production n8n URL:
The test URL only works while the n8n editor is open. For production:
- In n8n, note the production URL from your Webhook node — same path, but webhook not webhook-test
- Go back to Supabase → Database → Webhooks → edit job_status_changed
- Update the URL to the production URL
- Save
Activate the n8n workflow:
In n8n, toggle the workflow from Inactive to Active.
Run the full test:
Reset your test job back to assigned:
UPDATE jobs SET status = 'assigned'
WHERE address = '456 Oak Ave, Tampa FL';
Then trigger the automation:
UPDATE jobs SET status = 'enroute'
WHERE address = '456 Oak Ave, Tampa FL';
In n8n, go to the workflow’s Executions tab. You should see a successful execution. Click into it and trace the data through each node — webhook received the payload, IF node evaluated true, Supabase query returned the customer and technician data, SMS node sent the message (or logged it).
If the execution shows an error, click the failed node to see the error message. Common issues and fixes are in the next section.
Common Issues and Fixes
Database webhook not reaching n8n:
Check that the n8n workflow is Active (not just listening in test mode). Test mode only works while the n8n editor is open with the node selected. Production mode requires the workflow toggle to be Active.
Also confirm the webhook URL in Supabase matches your n8n production URL exactly, including the path segment.
Supabase webhook fires but the IF node evaluates false:
Check the exact path to the status field in the webhook payload. Depending on how your Webhook node is configured in n8n, the path may be $json.body.record.status or $json.record.status. Click on the Webhook node output in n8n and expand the JSON to see the exact structure.
Supabase query returning no results:
Double-check the job ID expression. The expression {{ $('Webhook').item.json.body.record.id }} references the webhook’s data by node name. If your Webhook node has a different name, update the reference accordingly. You can always hardcode a known job UUID in the query to test the Supabase node independently from the webhook.
n8n Supabase node 403 error:
You’re using the anon key instead of the service role key in the Supabase credential. Go to n8n → Credentials → find your Supabase credential → verify the key is the long service_role key from Supabase Settings → API, not the shorter anon key.
Status CHECK constraint violation:
If you try to update a job to a status value not in the allowed list, you’ll get:
ERROR: new row for relation "jobs" violates check constraint "jobs_status_check"
Check the exact status string you’re using. Valid values: pending, assigned, enroute, onsite, complete, cancelled. All lowercase, exact spelling.
updated_at not changing after an update:
The trigger is probably not attached. Run this to confirm it exists:
SELECT trigger_name, event_manipulation, event_object_table
FROM information_schema.triggers
WHERE event_object_table = 'jobs';
If jobs_set_updated_at is not listed, go back to Step 3 and run that SQL block again.
What to Build Next
You now have a production-ready job tracking database with live automation. Here are the natural extensions:
Add the remaining status transitions. Duplicate the IF node approach for onsite (arrival notification), complete (payment link SMS), and cancelled (cancellation confirmation). Each transition is a new branch in the n8n workflow or a separate workflow listening on the same database webhook.
Add a message log table. Every SMS you send should be recorded. Create a message_log table with columns: job_id, customer_id, message_type, body, sent_at, delivery_status. After the Twilio node fires, add a Supabase insert node to log the message. Now you have a full audit trail of every communication for every job.
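A minimal sketch of that table, using the columns described above — names and types are illustrative, so adjust them to your needs:

```sql
-- Audit trail of every outbound message, one row per SMS/email sent.
CREATE TABLE message_log (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  job_id uuid NOT NULL REFERENCES jobs(id) ON DELETE CASCADE,
  customer_id uuid REFERENCES customers(id) ON DELETE SET NULL,
  message_type text NOT NULL,   -- e.g. 'enroute_sms', 'payment_link'
  body text NOT NULL,           -- the exact message text that was sent
  delivery_status text,         -- filled in from the provider's response
  sent_at timestamptz NOT NULL DEFAULT now()
);
```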
Add a dispatcher notification. When a new job is created (INSERT event on the jobs table), fire a separate database webhook → n8n workflow → send a Slack message or email to the dispatcher with the job details. The dispatcher sees every new job the moment it’s created, without checking any dashboard.
Build the dispatch assignment workflow. When a job moves from pending to assigned (technician is set), send the technician an SMS with the job details: customer name, address, scheduled time, and notes. The technician never needs to check a dashboard — the job comes to them.
Add a Supabase function for job summary. Write a database function that takes a job_id and returns the full job + customer + technician data in one call. Call it from n8n via RPC. This keeps your n8n expressions simple — one RPC call, one response object, all the data.
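A minimal sketch of what that function could look like, reusing the JOIN from Step 5 — the function name, parameter name, and returned keys are assumptions for illustration:

```sql
-- One RPC call returns job + customer + technician context as a single JSON object.
CREATE OR REPLACE FUNCTION job_summary(p_job_id uuid)
RETURNS json AS $$
  SELECT json_build_object(
    'job_id', j.id,
    'status', j.status,
    'address', j.address,
    'customer_name', c.name,
    'customer_phone', c.phone,
    'technician_name', t.name    -- NULL when the job is unassigned
  )
  FROM jobs j
  JOIN customers c ON j.customer_id = c.id
  LEFT JOIN technicians t ON j.technician_id = t.id
  WHERE j.id = p_job_id;
$$ LANGUAGE sql STABLE;
```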
You’ve built the data foundation for a complete field service automation system. Everything that comes next — SMS notifications, payment links, dispatcher alerts — is automation logic that reads from and writes to this database.
Next: add the communication layer. What Is Twilio? →