`docs/api-reference/sql-api.md` (45 additions, 26 deletions)
If you detect a truncated response, treat it as an error.

#### Sample request

The following example query demonstrates these actions:

- Retrieves all rows from the `wikipedia` datasource.
- Filters the results to rows where the `user` value is `BlueMoon2662`.
- Applies the `sqlTimeZone` context parameter to set the time zone of the results to `America/Los_Angeles`.
- Returns the `header`, `typesHeader`, and `sqlTypesHeader` descriptors.

You can also specify query-level context parameters directly within the SQL query string using the `SET` command. For more details, see [SET statements](../querying/sql.md#set-statements).

The following request body is functionally equivalent to the previous example and uses `SET` instead of the `context` parameter:

```json
{
  "query": "SET sqlTimeZone='America/Los_Angeles'; SELECT * FROM wikipedia WHERE user='BlueMoon2662'",
  "header": true,
  "typesHeader": true,
  "sqlTypesHeader": true
}
```
`docs/api-reference/sql-ingestion-api.md` (12 additions, 17 deletions)
The `/druid/v2/sql/task` endpoint accepts the following:

### Sample request

The following example shows a query that fetches data from an external JSON source and inserts it into a table named `wikipedia`.
The example specifies two query context parameters:

- `maxNumTasks=3`: Limits the maximum number of parallel tasks to 3.
- `finalizeAggregations=false`: Ensures that Druid saves the aggregation's intermediate type during ingestion. For more information, see [Rollup](../multi-stage-query/concepts.md#rollup).
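As a sketch, a request body for such an ingestion might carry those two parameters in the `context` object as follows; the INSERT query itself is abbreviated to a placeholder.

```python
import json

# Sketch of a body for POST /druid/v2/sql/task. The query string is a
# placeholder; only the context parameters described above are spelled out.
request_body = {
    "query": "INSERT INTO wikipedia SELECT ... PARTITIONED BY DAY",
    "context": {
        "maxNumTasks": 3,               # run at most 3 parallel tasks
        "finalizeAggregations": False,  # keep intermediate aggregation types
    },
}

print(json.dumps(request_body, indent=2))
```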
`docs/multi-stage-query/examples.md` (1 addition, 1 deletion)
These example queries show you some of the things you can do when modifying queries for your use case. Copy the example queries into the **Query** view of the web console and run them to see what they do.

:::tip
When you insert or replace data with SQL-based ingestion, set the context parameter `finalizeAggregations` to `false`. This context parameter is automatically set for you if you use the Druid console. If you use the API, you must explicitly set it. For an example, see [SQL-based ingestion API](../api-reference/sql-ingestion-api.md#sample-request). For details on aggregations, see [Rollup](./concepts.md#rollup).
:::
Keep the following in mind when using EXTERN to export rows:

- Only INSERT statements are supported.
- Only `CSV` format is supported as an export format.
- Partitioning (PARTITIONED BY) and clustering (CLUSTERED BY) aren't supported with EXTERN statements.
- You can export to Amazon S3, Google GCS, or local storage.
- The destination provided should contain no other files or directories.

When you export data, use the `rowsPerPage` context parameter to restrict the size of exported files.
When the number of rows in the result set exceeds the value of the parameter, Druid splits the output into multiple files.
The following statement shows the format of a SQL query using EXTERN to export rows:

```sql
SET rowsPerPage=<number_of_rows>;
INSERT INTO
EXTERN(<destination function>)
AS CSV
SELECT
 <column>
FROM <table>
```

For details on applying context parameters using SET, see [SET statements](../querying/sql.md#set-statements).

##### S3 - Amazon S3

To export results to S3, pass the `s3()` function as an argument to the `EXTERN` function.
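For instance, a sketch of the export statement above pointed at S3, where `your_bucket`, `prefix/to/files`, and the `rowsPerPage` value are placeholder values:

```sql
SET rowsPerPage=100000;
INSERT INTO
EXTERN(s3(bucket => 'your_bucket', prefix => 'prefix/to/files'))
AS CSV
SELECT
 <column>
FROM <table>
```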
When using the sort-merge algorithm, keep the following in mind:

- All join types are supported with `sortMerge`: LEFT, RIGHT, INNER, FULL, and CROSS.

The following query runs a single sort-merge join stage that takes the following inputs:

- `eventstream` partitioned on `user_id`
- `users` partitioned on `id`

There is no limit on the size of either input.
The SET command assigns the `sqlJoinAlgorithm` context parameter so that Druid uses the sort-merge join algorithm for the query.

```sql
SET sqlJoinAlgorithm='sortMerge';
REPLACE INTO eventstream_enriched
OVERWRITE ALL
SELECT
  ...
PARTITIONED BY HOUR
CLUSTERED BY user
```

## Durable storage

SQL-based ingestion supports using durable storage to store intermediate files temporarily. Enabling it can improve reliability. For more information, see [Durable storage](../operations/durable-storage.md).
Note that you can also submit context parameters using [SET](../querying/sql.md#set-statements). For example:

```
"query": "SET executionMode = '\''ASYNC'\''; SET selectDestination = '\''durableStorage'\''; SELECT * FROM \"YOUR_DATASOURCE\" WHERE \"__time\" > TIMESTAMP '\''2017-09-01'\'' AND \"__time\" <= TIMESTAMP '\''2017-09-02'\''"
```
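The `'\''` sequences in that fragment are shell escapes for embedding the JSON in a single-quoted curl argument, not part of the query itself. Built programmatically, the same body might look like this sketch; the datasource name is a placeholder.

```python
import json

# The same request body without shell escaping: two SET statements carry the
# executionMode and selectDestination context parameters in the query string.
payload = {
    "query": (
        "SET executionMode = 'ASYNC'; "
        "SET selectDestination = 'durableStorage'; "
        'SELECT * FROM "YOUR_DATASOURCE" '
        "WHERE \"__time\" > TIMESTAMP '2017-09-01' "
        "AND \"__time\" <= TIMESTAMP '2017-09-02'"
    )
}

print(json.dumps(payload))
```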
The response for submitting a query includes the query ID along with basic information, such as when you submitted the query and the schema of the results:

```json
...
}
```

### Get query status

You can check the status of a query with the following API call:
`docs/querying/using-caching.md` (2 additions, 1 deletion)
As long as the service is set to populate the cache, you can set cache options for individual queries.

```json
...
  }
}
```

In this example the user has set `populateCache` to `false` to avoid filling the result cache with results for segments that are over a year old. For more information, see [Druid SQL client APIs](../api-reference/sql-api.md).

You can also use the SET command to specify cache options directly within your SQL query string. For more information, see [SET statements](../querying/sql.md#set-statements).
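For instance, a sketch using the `useCache` and `populateCache` context parameters via SET; the query itself is illustrative:

```sql
SET useCache=true;
SET populateCache=false;
SELECT COUNT(*) FROM wikipedia
WHERE "__time" >= CURRENT_TIMESTAMP - INTERVAL '1' YEAR
```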