
Commit 1ca2518

Markdown doc improvements
1 parent 40c13bf commit 1ca2518

File tree

2 files changed: +8 additions, -8 deletions


docs/bigquery-cdcTarget.md

Lines changed: 6 additions & 6 deletions
@@ -1,4 +1,4 @@
-# Google BigQuery Delta Target
+# Google BigQuery Replication Target
 
 Description
 -----------
@@ -9,7 +9,7 @@ table using a BigQuery merge query.
 
 The final target tables will include all the original columns from the source table plus one additional
 _sequence_num column. The sequence number is used to ensure that data is not duplicated or missed in
-replicator failure scenarios.
+replication job failure scenarios.
 
 Credentials
 -----------
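
To make the merge-plus-sequence-number behavior concrete, here is a minimal sketch of an idempotent BigQuery MERGE guarded by _sequence_num, issued through the google-cloud-bigquery Java client. The table and column names (`my_dataset.customers`, `id`, `name`) are invented for illustration; the query the plugin actually generates may differ.

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.QueryJobConfiguration;

public class MergeSketch {
  public static void main(String[] args) throws InterruptedException {
    // Hypothetical table names; the real job derives them from the pipeline config.
    String targetTable = "my_dataset.customers";
    String stagingTable = "my_dataset.staging_customers";

    // The _sequence_num guard keeps replays idempotent: a change event is only
    // applied when its sequence number is newer than what the target already holds,
    // so re-delivered events after a failure neither duplicate nor clobber rows.
    String mergeSql =
        "MERGE `" + targetTable + "` T\n"
      + "USING `" + stagingTable + "` S\n"
      + "ON T.id = S.id\n"
      + "WHEN MATCHED AND S._sequence_num > T._sequence_num THEN\n"
      + "  UPDATE SET name = S.name, _sequence_num = S._sequence_num\n"
      + "WHEN NOT MATCHED THEN\n"
      + "  INSERT (id, name, _sequence_num) VALUES (S.id, S.name, S._sequence_num)";

    BigQuery bigQuery = BigQueryOptions.getDefaultInstance().getService();
    bigQuery.query(QueryJobConfiguration.newBuilder(mergeSql).build());
  }
}
```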
@@ -46,9 +46,9 @@ https://cloud.google.com/bigquery/docs/locations. This value is ignored if an ex
 staging bucket and the BigQuery dataset will be created in the same location as that bucket.
 
 **Staging Bucket**: GCS bucket to write change events to before loading them into staging tables.
-Changes are written to a directory that contains the replicator name and namespace. It is safe to use
-the same bucket across multiple replicators within the same instance. If it is shared by replicators across
-multiple instances, ensure that the namespace and name are unique, otherwise the behavior is undefined.
+Changes are written to a directory that contains the replication job name and namespace. It is safe to use
+the same bucket across multiple replication jobs within the same instance. If it is shared by replication jobs
+across multiple instances, ensure that the namespace and name are unique, otherwise the behavior is undefined.
 The bucket must be in the same location as the BigQuery dataset. If not provided, new bucket will be created for
 each pipeline named as 'df-rbq-<namespace-name>-<pipeline-name>-<deployment-timestamp>'. Note that user
 will have to explicitly delete the bucket once the pipeline is deleted.
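
For illustration only, a sketch of how the documented default bucket name could be assembled. The helper, its parameters, and the lowercasing are assumptions, not the plugin's actual code; only the 'df-rbq-<namespace-name>-<pipeline-name>-<deployment-timestamp>' pattern comes from the docs above.

```java
public class StagingBucketName {
  // Hypothetical helper mirroring the documented naming scheme:
  // 'df-rbq-<namespace-name>-<pipeline-name>-<deployment-timestamp>'.
  static String defaultBucket(String namespace, String pipeline, long deployMillis) {
    // GCS bucket names must be lowercase; normalizing here is an assumption
    // about the plugin's behavior, not a documented guarantee.
    return String.format("df-rbq-%s-%s-%d",
        namespace.toLowerCase(), pipeline.toLowerCase(), deployMillis);
  }

  public static void main(String[] args) {
    System.out.println(defaultBucket("default", "orders-replication", 1700000000000L));
    // -> df-rbq-default-orders-replication-1700000000000
  }
}
```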
@@ -63,7 +63,7 @@ of the cluster.
 Staging tables names are generated by prepending this prefix to the target table name.
 
 **Require Manual Drop Intervention**: Whether to require manual administrative action to drop tables and
-datasets when a drop table or drop database event is encountered. When set to true, the replication job will
+datasets when a drop table or drop database event is encountered. When set to true, the replication job will
 not delete a table or dataset. Instead, it will fail and retry until the table or dataset does not exist.
 If the dataset or table does not already exist, no manual intervention is required. The event will be
 skipped as normal.
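
A minimal sketch of the fail-and-retry loop described above, assuming a hypothetical `TableStore` abstraction: the job never deletes the table itself and only proceeds once an administrator has dropped it (or it never existed). A real implementation would fail the event and rely on the platform's retry machinery rather than sleeping inline.

```java
import java.util.concurrent.TimeUnit;

public class ManualDropGate {
  // Hypothetical abstraction over BigQuery table existence checks.
  interface TableStore {
    boolean tableExists(String dataset, String table);
  }

  static void awaitManualDrop(TableStore store, String dataset, String table)
      throws InterruptedException {
    while (store.tableExists(dataset, table)) {
      // Refuse to drop the table; report and retry until an admin removes it.
      System.err.printf("Waiting for manual drop of %s.%s%n", dataset, table);
      TimeUnit.SECONDS.sleep(30);
    }
    // Table is absent (or never existed): the drop event can be skipped as normal.
  }
}
```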

src/main/java/io/cdap/delta/bigquery/BigQueryTarget.java

Lines changed: 2 additions & 2 deletions
@@ -185,8 +185,8 @@ private static String stringifyPipelineId(DeltaPipelineId pipelineId) {
   public static class Conf extends PluginConfig {
 
     @Nullable
-    @Description("Project of the BigQuery dataset. When running on a Google Cloud VM, this can be set to "
-      + "'auto-detect', which will use the project of the VM.")
+    @Description("Project of the BigQuery dataset. When running on a Dataproc cluster, this can be set to "
+      + "'auto-detect', which will use the project of the cluster.")
     private String project;
 
     @Macro
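
One plausible way an 'auto-detect' value could be resolved is via `ServiceOptions.getDefaultProjectId()` from google-cloud-core, which consults environment variables, gcloud configuration, and the GCE metadata server (hence the cluster's project on Dataproc). Whether BigQueryTarget resolves it exactly this way is an assumption; the sketch below only illustrates the idea.

```java
import com.google.cloud.ServiceOptions;

public class ProjectResolver {
  // Hypothetical resolver for the 'project' config described above.
  static String resolveProject(String configuredProject) {
    if ("auto-detect".equals(configuredProject)) {
      // Falls back to the ambient project: env vars, gcloud config, or the
      // metadata server when running on a GCE/Dataproc VM.
      return ServiceOptions.getDefaultProjectId();
    }
    return configuredProject;
  }
}
```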
