PostgREST automatically turns your PostgreSQL database into a RESTful API. Today, PostgREST v10 was released. v10 is not yet available on the Supabase Platform, but it is available for self-hosting or as an executable binary from the GitHub release page.
Let's take a look at some of its new features that go hand in hand with supabase-js v2.
EXPLAIN
Akin to the PostgreSQL EXPLAIN command, you can now get the execution plan of a request through HTTP or supabase-js. This works transparently for reads, writes and RPC because every request to PostgREST generates a single SQL query.
This is only suitable for development environments and is only enabled when the db-plan-enabled config is set to true.
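For self-hosted setups, enabling it might look like this (db-plan-enabled is the PostgREST config option; PGRST_DB_PLAN_ENABLED is its environment-variable equivalent, e.g. for Docker):

```
# postgrest.conf
db-plan-enabled = true

# or, as an environment variable:
# PGRST_DB_PLAN_ENABLED=true
```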
Getting the execution plan through HTTP
Using curl, you can obtain the execution plan by specifying a vendor media type on the Accept header.
$ curl -H 'Accept: application/vnd.pgrst.plan' \
'https://<project>.supabase.co/rest/v1/clients?select=*&id=eq.1'
Aggregate (cost=8.18..8.20 rows=1 width=112)
-> Index Scan using clients_pkey on clients (cost=0.15..8.17 rows=1 width=36)
Index Cond: (id = 1)
The text format is used by default, which gives you the same output you’d get in other SQL clients like psql. You can change the format to JSON by using the media type suffix application/vnd.pgrst.plan+json.
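The JSON format follows the shape of PostgreSQL’s EXPLAIN (FORMAT JSON) output: an array containing a “Plan” object, with child nodes nested under “Plans”. As a rough sketch (the plan object below is illustrative, not real output), you can drill into it programmatically:

```javascript
// Illustrative plan, shaped like PostgreSQL's EXPLAIN (FORMAT JSON) output.
const plan = [
  {
    Plan: {
      'Node Type': 'Aggregate',
      'Total Cost': 8.2,
      Plans: [
        { 'Node Type': 'Index Scan', 'Index Name': 'clients_pkey', 'Total Cost': 8.17 },
      ],
    },
  },
]

// Collect every node type in the plan tree, depth-first.
function nodeTypes(node) {
  return [node['Node Type'], ...(node.Plans ?? []).flatMap(nodeTypes)]
}

console.log(nodeTypes(plan[0].Plan)) // [ 'Aggregate', 'Index Scan' ]
```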
Explaining supabase-js queries
For supabase-js, you can get the execution plan using the explain() transform. This works for every supabase-js method, including rpc(). Here’s an example for select().
const { data, error } = await supabase
.from('projects')
.select('*')
.eq('id', 1)
.explain()
console.log(data)
Aggregate (cost=8.18..8.20 rows=1 width=112)
-> Index Scan using projects_pkey on projects (cost=0.15..8.17 rows=1 width=40)
Index Cond: (id = 1)
Explaining the plan output in detail is beyond the scope of this blog post, but in short it’s a tree of operations that PostgreSQL will follow to execute a query. Here we see the “Aggregate” node, which corresponds to the json_agg function used by the PostgREST-generated query (more details on how to find these later), and the “Index Scan” node, which means an index on “id” was used for a fast search.
Explaining RLS policies
The raison d'être of explain() is to provide quicker feedback on the performance of queries, especially in the presence of RLS policies. For instance, let’s say we have this basic policy:
create policy "anon can't read"
on projects for select to anon
using (
false
);
And we use explain() again, this time with the analyze option (which executes the query, same as the EXPLAIN ANALYZE counterpart in SQL), so we can see the execution time.
const { data, error } = await supabase
.from('projects')
.select('*')
.eq('id', 1)
.explain({ analyze: true })
console.log(data)
Aggregate (cost=8.18..8.20 rows=1 width=112) (actual time=0.017..0.018 rows=1 loops=1)
-> Index Scan using projects_pkey on projects (cost=0.15..8.17 rows=1 width=40) (actual time=0.012..0.012 rows=0 loops=1)
Index Cond: (id = 1)
Filter: false
Rows Removed by Filter: 1
Planning Time: 0.092 ms
Execution Time: 0.046 ms
Here you can see the “Filter” node, which is a simple false as defined by the above policy; this proves that the RLS policy is being applied. The actual “Execution Time” is also shown, which is a fraction of a millisecond. Note that this is only the query execution time; it doesn’t account for the latency of transferring the data from the database to the frontend.
Getting the Query Identifier
explain() is also useful for getting the pg_stat_statements query identifier, which you can use in the Supabase logs to obtain the generated SQL queries. Here we use the json format and the verbose option to get it.
const { data, error } = await supabase
.from('projects')
.select('*')
.eq('id', 1)
.explain({ format: 'json', verbose: true })
console.log(data[0]['Query Identifier'])
// 2811722635570756600
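With the identifier in hand, you can also look up the normalized SQL directly in pg_stat_statements (a sketch, assuming the pg_stat_statements extension is enabled on your database):

```sql
select query
from pg_stat_statements
where queryid = 2811722635570756600;
```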
For more detailed information, you can also use the settings, buffers and wal options with explain().
Improved Relationship Detection
One-to-one relationships
To avoid unnecessary JSON arrays in a query result, one-to-one relationships are now detected automatically. For this you can use either:
-- A unique constraint on a foreign key
create table country(
id serial primary key
, name text
);
create table capital(
id serial primary key
, name text
, country_id int unique
, foreign key (country_id) references country(id)
);
-- or a primary key on a foreign key
create table country(
id serial primary key
, name text
);
create table capital(
id serial primary key
, name text
, foreign key (id) references country(id)
);
Both options should give you a JSON object when embedding one table in the other.
const { data, error } = await supabase
.from('country')
.select('name,capital(name)')
.in('id', [1, 2])
console.log(data)
[
{ "name": "Afghanistan", "capital": { "name": "Kabul" } },
{ "name": "Algeria", "capital": { "name": "Algiers" } }
]
Computed relationships
PostgREST uses foreign keys to detect relationships. This poses a problem for database objects that cannot have foreign keys, like views. Though PostgREST tries to infer relationships based on the foreign keys of the views’ source tables (docs), this is not infallible: in particular, it fails when views have a complex definition (e.g. multiple UNIONs). For these cases you can use “computed relationships”, which are “inlinable” SQL functions, similar to computed columns.
Let’s assume we have a players view and a scores materialized view, and we want to define a one-to-many relationship between them.
create view players as
select id, name from players_a
union
select id, name from players_b;
create materialized view scores as
select
name as lvl_name,
compute_score(stats) as total, player_id
from level_1
union
select
name as lvl_name,
compute_score(stats) as total,
player_id
from level_2;
For this we can define a couple of computed relationships.
-- many-to-one relationship on scores -> players
create function player(scores)
returns setof players rows 1 -- rows 1 defines a "one" end
language sql stable
as $$
select *
from players
where id = $1.player_id;
$$;
-- one-to-many relationship on players -> scores
create function scores(players)
returns setof scores -- there's an implicit rows 1000 here, which is assumed to be "many"
language sql stable
as $$
select *
from scores
where player_id = $1.id;
$$;
And now we can embed both views from one end to the other. Note that the function names are arbitrary; here we named them after the views for convenience.
const { data, error } = await supabase
.from('scores')
.select('lvl_name, player(name)')
.eq('lvl_name', 'Grand Prix 1')
.single()
console.log(data)
{
"lvl_name": "Grand Prix 1",
"player": { "name": "Ben Richards"}
}
const { data, error } = await supabase
.from('players')
.select('name,scores(lvl_name, total)')
.eq('id', 1)
.single()
console.log(data)
{
"name":"Ben Richards",
"scores":[
{"lvl_name": "Grand Prix 1", "total": 48761.24},
{"lvl_name": "Grand Prix 2", "total": -40.25}
]
}
Computed relationships follow the rules of Inlining of SQL Functions, which allows them to be injected into the PostgREST-generated queries, making them efficient to use. You can also use computed relationships to override automatically detected relationships.
Breaking change on many-to-many relationships
Detecting join tables for many-to-many relationships has worked for many releases. However, on complex schemas, join tables could be detected incorrectly, causing errors when used in resource embedding. To solve this, the following BREAKING CHANGE had to be made:
-- for "books_authors" to be detected as a join table,
-- the primary key must include the foreign key columns
-- of the many-to-many ends
create table books(
id int primary key
, name text
);
create table authors(
id int primary key
, name text
);
create table books_authors(
book_id int references books(id)
, author_id int references authors(id)
, primary key(book_id, author_id) -- this is now necessary
);
If changing the primary key is not feasible for a particular case, the alternative is to use computed relationships to define the many-to-many relationship.
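For example, a computed relationship from books to authors might look like this (a sketch, using the tables above without relying on the join table’s primary key):

```sql
-- many-to-many relationship on books -> authors
create function authors(books)
returns setof authors
language sql stable
as $$
  select a.*
  from authors a
  join books_authors ba on ba.author_id = a.id
  where ba.book_id = $1.id;
$$;
```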
One-to-many and many-to-one relationships keep working as before; there is no change in their detection.
Closing up
Computed relationships are the first step towards PostgREST extensibility; customizing the aggregate used for the response and custom operators are planned for upcoming releases.
We’ll release PostgREST 10 on the Supabase platform over the next month.