Supabase Self-hosted: Getting API Gateway logs in BigQuery fails with error 'Name metadata not found inside t at' #2245
Comments
Did you figure this out? When you start sending some logs, the metadata column should be created and then it should work. The query you pasted into BigQuery here references table aliases that only Logflare knows about, e.g. "edge_logs", and that query has to be run through a Logflare Endpoint to work.
We are using Supabase self-hosted, so everything related to logging goes via a logflare container. The issue is: when BQ has no data, certain columns are not created, and so a query that references those columns throws errors. Let me know if you need more info.
If all services are sending logs, the metadata column should be created. If a service is not sending metadata with its logs, then you should just be able to create the column manually in BigQuery and queries should work.
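For anyone taking the manual route, here is a minimal sketch of that DDL. The project, dataset, and table names are placeholders; the physical BigQuery table is usually named after the Logflare source ID rather than the friendly alias, and the STRUCT sub-fields below are illustrative only, so copy the real shape from a table that already has data.

```sql
-- Placeholder names: point this at the actual Logflare-managed table,
-- which is usually named after the Logflare source ID, not the alias.
-- The STRUCT sub-fields are illustrative; mirror a populated table's schema.
ALTER TABLE `my_project.my_dataset.pgbouncer_logs`
ADD COLUMN IF NOT EXISTS metadata ARRAY<STRUCT<host STRING>>;
```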
Not all services that are referenced in the BQ SQL are deployed or run when running self-hosted. For example, we don't have pgbouncer logs. Similarly, functions are not yet supported (hence not run) on self-hosted. But the SQL assumes there are logs for them, and this assumption is incorrect. While I could go and create these columns, that's not ideal, because I don't want to do that for each BQ project. Logflare sends a very large SQL statement that references many Supabase services (and hence many BQ tables), but in the end only queries one service. From BQ's point of view, however, the entire SQL must be valid for all tables, and this isn't the case because some services never log anything. Ideally, if we query for edge_logs (Kong logs), the SQL should reference only that table.
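To make that failure mode concrete, here is a simplified sketch of the query shape being described; the table names, aliases, and columns are illustrative, not the actual endpoint SQL Logflare generates. BigQuery resolves names across the whole statement before executing it, so a single table missing metadata sinks the entire query even when that table's branch is never read.

```sql
-- Illustrative only: not the real Logflare endpoint SQL.
WITH edge_logs AS (
  SELECT t.timestamp, t.event_message, t.metadata
  FROM `my_project.my_dataset.edge_logs` t
),
pgbouncer_logs AS (
  -- If this table never received a log line, the `metadata` column does
  -- not exist, and BigQuery rejects the whole statement with
  -- "Name metadata not found inside t", even though nothing below
  -- ever reads from this CTE.
  SELECT t.timestamp, t.event_message, t.metadata
  FROM `my_project.my_dataset.pgbouncer_logs` t
)
SELECT timestamp, event_message
FROM edge_logs
LIMIT 100;
```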
Which ones are you using exactly? You can also adjust the SQL in the Endpoint to remove any services you are not using. Are you deploying lots of self-hosted Supabase instances?
@srasul can you paste in your docker compose setup?
We are using Supabase Self-hosted with BigQuery as the 'sink' for our logs. When trying to fetch the logs in Supabase Studio, we get the following error:

Name metadata not found inside t at
Analyzing the query that is sent to BQ, we see that it selects t.metadata from the tables. But when I look at the data in some of the tables, I notice they have no data, so the only columns they have are:
timestamp
id
event_message
So we are trying to query a column that does not exist. And even though this BigQuery SQL contains the sub-queries for all the tables, only one table is actually queried. But since we send the SQL for all tables, the SQL has to be valid for all of them.
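As a quick way to see how widespread this is in a given project, here is a minimal diagnostic sketch; the dataset name is a placeholder for the dataset Logflare writes into. It lists the log tables that have no metadata column at all.

```sql
-- Placeholder dataset name; use the dataset Logflare writes into.
-- Lists tables lacking a `metadata` column, i.e. the ones that make
-- any endpoint SQL referencing t.metadata fail.
SELECT table_name
FROM my_dataset.INFORMATION_SCHEMA.TABLES
WHERE table_name NOT IN (
  SELECT table_name
  FROM my_dataset.INFORMATION_SCHEMA.COLUMNS
  WHERE column_name = 'metadata'
);
```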
Can you advise how we can proceed with this, or what steps we can take to get these queries in BQ to run without errors?