rawquery: 1 product, 1 bill. EU-hosted (Frankfurt).
The Snowflake stack: 4 vendors, 4 bills. Snowflake Inc. (US).
## TL;DR
- Pick rawquery if you want one product covering ingestion, storage, transforms, and serving, EU-hosted, with flat pricing, and your analytical data is under ~10 TB.
- Pick Snowflake if you have petabyte-scale workloads, thousands of concurrent users, enterprise governance needs (masking policies, row-access), or you already run on the Snowflake ecosystem (Snowpark, Cortex, Streamlit, marketplace datasets).
## Feature-by-feature
| Dimension | rawquery | Snowflake |
|---|---|---|
| Pricing | Flat monthly | Per-second compute + per-TB storage |
| Jurisdiction | EU-hosted (Frankfurt) | US (Snowflake Inc.) |
| Storage format | Apache Iceberg (open) | Proprietary FDN; Iceberg opt-in |
| Ingestion | Built-in + JSON spec | Separate vendor (Fivetran, ...) |
| Transforms | Built-in SQL DAG | Separate (dbt) |
| BI connectivity | Postgres wire protocol | JDBC / ODBC / native drivers |
| Compute | DuckDB + worker pool | Virtual warehouses, auto-scale |
| Scale ceiling | Under ~10 TB | Multi-petabyte |
| Governance | Workspace RBAC + audit | Enterprise (masking, row-access) |
| Certifications | GDPR-native | SOC 2 / HIPAA / PCI / FedRAMP |
## The pricing reality
Snowflake charges by the credit, and credits map to virtual warehouse compute time (per-second, with a 60-second minimum). Storage is billed separately per TB. The list price for Enterprise starts around $3 per credit; a single user running ad-hoc queries on a MEDIUM warehouse can rack up hundreds of credits per month without noticing. Fivetran and dbt Cloud are additional line items on top of that.
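To make the variable-cost mechanics concrete, here is a back-of-the-envelope sketch. The MEDIUM warehouse rate (4 credits per hour) is Snowflake's published sizing, and the ~$3 Enterprise list price comes from the paragraph above; the usage pattern itself is a made-up example, not a benchmark.

```python
# Back-of-the-envelope Snowflake compute cost for one ad-hoc user.
# Hypothetical usage: 2 hours of active MEDIUM-warehouse time per
# working day, 22 working days per month. Storage billed separately.
CREDITS_PER_HOUR_MEDIUM = 4   # Snowflake's published rate for a MEDIUM warehouse
PRICE_PER_CREDIT = 3.00       # approximate Enterprise list price, USD

hours_per_day = 2
working_days = 22

credits = CREDITS_PER_HOUR_MEDIUM * hours_per_day * working_days
monthly_cost = credits * PRICE_PER_CREDIT

print(credits, monthly_cost)  # 176 credits, $528.00 per month, compute alone
```

One user, part-time querying, and the compute line is already in the hundreds of dollars before storage, Fivetran, or dbt Cloud are added.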
rawquery has a flat monthly price, listed on the website, that includes ingestion, storage, compute, and serving. There is no per-query, per-credit, or per-seat variable. Run more queries than your plan allows and you hit the plan's caps; the bill does not move.
Our honest rule of thumb: if your current Snowflake + Fivetran + dbt Cloud spend is under $50k / year, the combined stack is overkill and you are subsidizing unused scale. Above that, the Snowflake ecosystem starts to earn its keep.
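To sketch how that threshold plays out, here is the same arithmetic with placeholder line items for a small team. Every figure below is illustrative, not a quote from any vendor.

```python
# Illustrative annual spend for a small-team Snowflake stack.
# All figures are hypothetical placeholders for this sketch.
stack = {
    "snowflake_compute_and_storage": 24_000,
    "fivetran": 12_000,
    "dbt_cloud": 4_800,
}

annual = sum(stack.values())
below_threshold = annual < 50_000  # the rule-of-thumb line from above

print(annual, below_threshold)  # 40800 True -> paying for unused scale
```

A team in this range is carrying three contracts and three renewal cycles for workloads a single flat-priced product can absorb.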
## Beyond the EU region checkbox
Snowflake offers EU regions (Frankfurt, Dublin, Stockholm, Paris). The data sits in the EU. The parent company does not. Snowflake Inc. is a Delaware corporation, and data stored with a US-incorporated provider remains reachable under the US CLOUD Act regardless of where the bits physically live. For some teams this is a paperwork concern. For public-sector, healthcare, and finance it is a compliance blocker.
rawquery is EU-incorporated and EU-hosted in Frankfurt. No US entity in the chain of custody.
## Open format vs proprietary
Snowflake stores data in its proprietary FDN format by default. They added Apache Iceberg support in 2024 as an opt-in external table type, an alternative to the default path. With rawquery, Iceberg is the only path. Every table is an Iceberg table on S3. You can read your data from Spark, Trino, Flink, DuckDB, or anything else that speaks Iceberg. If you leave, you leave with everything intact. Your data is yours, in a format you do not need our permission to read.
## Where Snowflake still wins
- Scale. rawquery is built for teams under ~10 TB. Snowflake routinely serves petabyte-scale workloads with predictable latency. If you are there, you are not our customer.
- Concurrency. Multi-cluster warehouses handle thousands of concurrent queries. Our worker pool handles hundreds in its current form.
- Ecosystem. Snowpark, Cortex, Streamlit-in-Snowflake, native apps, data marketplace. We have none of these and are not building all of them.
- Governance. Data masking policies, row-access policies, object tagging, auditing. Snowflake's enterprise features are years ahead.
- Certifications. SOC 2 Type II, HIPAA, PCI, FedRAMP, ISO 27001. We are GDPR-native; the rest are on the roadmap.
## Try rawquery
Three steps from zero to querying.
```sh
# Install the CLI
curl -sSL rawquery.dev/install.sh | sh

# Connect a Stripe source (or Postgres, HubSpot, Shopify, ...)
rq connections create my-stripe --type stripe -p api_key=sk_live_xxx
rq connections sync my-stripe

# Query it
rq query "SELECT status, count(*) FROM my_stripe.subscriptions GROUP BY 1"
```