- # Google BigQuery Delta Target
+ # Google BigQuery Replication Target

Description
-----------
@@ -9,7 +9,7 @@ table using a BigQuery merge query.

The final target tables will include all the original columns from the source table plus one additional
_sequence_num column. The sequence number is used to ensure that data is not duplicated or missed in
- replicator failure scenarios.
+ replication job failure scenarios.
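
To make that concrete, here is a minimal sketch of the kind of merge the target runs, assuming a hypothetical `customers` target table keyed by `id` with a single `name` column and a matching staging table; the plugin generates the real query itself, so this only illustrates how `_sequence_num` guards against double-applying events.

```python
from google.cloud import bigquery

client = bigquery.Client()

# All project, dataset, table, and column names here are hypothetical.
merge_sql = """
MERGE `my_project.my_dataset.customers` AS target
USING (
  -- Collapse staged change events to the latest event per key,
  -- using the generated _sequence_num column for ordering.
  SELECT id, name, _sequence_num
  FROM `my_project.my_dataset._staging_customers`
  QUALIFY ROW_NUMBER() OVER (PARTITION BY id ORDER BY _sequence_num DESC) = 1
) AS source
ON target.id = source.id
-- Only apply events newer than what the target already holds, so a
-- batch retried after a replication job failure is not applied twice.
WHEN MATCHED AND source._sequence_num > target._sequence_num THEN
  UPDATE SET name = source.name, _sequence_num = source._sequence_num
WHEN NOT MATCHED THEN
  INSERT (id, name, _sequence_num)
  VALUES (source.id, source.name, source._sequence_num)
"""

client.query(merge_sql).result()  # block until the merge job completes
```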

Credentials
-----------
@@ -46,9 +46,9 @@ https://cloud.google.com/bigquery/docs/locations. This value is ignored if an ex
staging bucket and the BigQuery dataset will be created in the same location as that bucket.

**Staging Bucket**: GCS bucket to write change events to before loading them into staging tables.
- Changes are written to a directory that contains the replicator name and namespace. It is safe to use
- the same bucket across multiple replicators within the same instance. If it is shared by replicators across
- multiple instances, ensure that the namespace and name are unique, otherwise the behavior is undefined.
+ Changes are written to a directory that contains the replication job name and namespace. It is safe to use
+ the same bucket across multiple replication jobs within the same instance. If it is shared by replication jobs
+ across multiple instances, ensure that the namespace and name are unique; otherwise, the behavior is undefined.
The bucket must be in the same location as the BigQuery dataset. If not provided, a new bucket will be created for
each pipeline, named 'df-rbq-<namespace-name>-<pipeline-name>-<deployment-timestamp>'. Note that the user
will have to explicitly delete the bucket once the pipeline is deleted.
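
Since that bucket outlives the pipeline, its removal is a manual cleanup step. A minimal sketch of that cleanup with the google-cloud-storage client, using a hypothetical bucket name that follows the pattern above:

```python
from google.cloud import storage

client = storage.Client()

# Hypothetical bucket name following the 'df-rbq-...' convention above.
bucket = client.bucket("df-rbq-mynamespace-mypipeline-1700000000000")

# force=True removes the remaining objects first; it only works for
# buckets with few objects, so empty large buckets in batches beforehand.
bucket.delete(force=True)
```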
@@ -63,7 +63,7 @@ of the cluster.
Staging table names are generated by prepending this prefix to the target table name.
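For example, with a hypothetical prefix of `_staging_`, changes destined for a target table named `customers` would be staged in `_staging_customers`.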

**Require Manual Drop Intervention**: Whether to require manual administrative action to drop tables and
- datasets when a drop table or drop database event is encountered. When set to true, the replicator will
+ datasets when a drop table or drop database event is encountered. When set to true, the replication job will
not delete a table or dataset. Instead, it will fail and retry until the table or dataset does not exist.
If the dataset or table does not already exist, no manual intervention is required. The event will be
skipped as normal.
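
If the replication job is stuck retrying such a drop event, the required manual action is to drop the table or dataset yourself. A sketch of that step with the google-cloud-bigquery client, using hypothetical names:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Drop the table the replication job is waiting on (hypothetical name).
client.delete_table("my_project.my_dataset.customers", not_found_ok=True)

# For a drop database event, drop the whole dataset instead.
client.delete_dataset(
    "my_project.my_dataset", delete_contents=True, not_found_ok=True
)
```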