Commit fedbbea

Author: Jacob Ferriero
chore: Add flake8 static check (#449)
* remove trailing whitespace
* add flake8 static check to check_format.sh and Dockerfile
* fix flake8 issues tools/slo-generator
* fix flake8 issues tools/quota-manager
* ignore line length
* remove trailing whitespace
* fix tools
* fix examples
* remove max line length as it's later ignored
1 parent 6318437 commit fedbbea

241 files changed (+28169, -28144 lines)


cloudbuild/Dockerfile

Lines changed: 1 addition & 0 deletions

@@ -10,6 +10,7 @@ RUN apt-get update && apt-get install -y build-essential
 # install yapf
 RUN pip install yapf
 RUN pip3 install yapf
+RUN pip3 install flake8

 # install golang (+gofmt)
 RUN apt-get install -y golang
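The Dockerfile change above installs flake8 into the build image; per the commit message it is also wired into check_format.sh. A minimal sketch of what that step might look like (the function name and file discovery are assumptions, not the repository's actual script), with line-length checks skipped as the commit notes:

```shell
#!/bin/bash
# Hypothetical flake8 step for a format-check script.
# E501 (line too long) is ignored, matching the commit note "ignore line length".
run_flake8() {
  local files
  # Collect every Python file under the given directory.
  files=$(find "$1" -name '*.py' 2>/dev/null)
  [ -z "$files" ] && return 0  # nothing to lint
  flake8 --extend-ignore=E501 $files
}
```

Running `run_flake8 .` then exits non-zero if any style violation other than line length is found, which is what lets CI fail the build.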

examples/alert-absence-dedup/policy_doc.md

Lines changed: 1 addition & 1 deletion

@@ -18,6 +18,6 @@ sources are monitored by this alert.
 data ingestion.

 ### Case 2: At least 1 time-series absent
-1. The time series that has gone absent should be able to be identified with 
+1. The time series that has gone absent should be able to be identified with
 by the chart for metric: ${metric.display_name}.
 2. Investigate the specific data sources. The resource type is: ${resource.type}

examples/bigquery-audit-log/README.md

Lines changed: 3 additions & 3 deletions

@@ -23,14 +23,14 @@ A short description relevant to our use case is presented below -
 1. In the GCP Cloud Console select the project you want to export the logs to. Go to Stackdriver --> Logging --> Exports.
 2. Click on Create Export. Select the following in the drop down menu: "BigQuery", "All logs", "Any log level", "No limit" and "Jump to now" respectively.
 3. In the configuration windows on the right side of the screen, enter a Sink Name of your choice. Select BigQuery as Sink Service. Select the "BigQuery Audit" (refer to Prerequisites) dataset as the Sink Destination.
-4. Click on Create Sink. 
+4. Click on Create Sink.
 5. A message box pops up to notify you of successful creation. Click on Close.
 6. Click on the Play button located on the top bar to start the export.

 ### 2. Scheduling a BigQuery job
-Use the SQL script in the file bigquery_audit_log.sql (located in this GitHub folder) to create a scheduled query in BigQuery. Click [here](https://cloud.google.com/bigquery/docs/scheduling-queries) for instructions on how to create scheduled queries.
+Use the SQL script in the file bigquery_audit_log.sql (located in this GitHub folder) to create a scheduled query in BigQuery. Click [here](https://cloud.google.com/bigquery/docs/scheduling-queries) for instructions on how to create scheduled queries.

-Create a materialized table that stores data from the scheduled query. 
+Create a materialized table that stores data from the scheduled query.
 You can give it a custom name, we will be referring to it as **bigquery_audit_log**.

 ### 3. Copying the data source in Data Studio
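The console steps in this README's section 1 can also be done from the command line. A hedged sketch using the gcloud CLI (the sink name, project ID, dataset, and filter below are placeholders, not values from this guide):

```shell
# Create a log sink that exports BigQuery audit log entries to a BigQuery dataset.
# Replace PROJECT_ID and the dataset path with your own values.
gcloud logging sinks create bigquery-audit-sink \
  bigquery.googleapis.com/projects/PROJECT_ID/datasets/bigquery_audit \
  --log-filter='protoPayload.serviceName="bigquery.googleapis.com"'
```

On creation, gcloud prints a writer service account; grant it write access on the destination dataset so entries can start flowing.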

examples/bigquery-audit-log/bigquery_audit_log.sql

Lines changed: 3 additions & 3 deletions

@@ -42,7 +42,7 @@ WITH BQAudit AS (
 protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobStatistics.startTime, SECOND)) / 60) AS INT64)
 AS executionMinuteBuckets,
 IF(COALESCE(protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobStatistics.totalProcessedBytes,
-protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobStatistics.totalSlotMs, 
+protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobStatistics.totalSlotMs,
 protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobStatus.error.code) IS NULL, TRUE, FALSE
 ) AS isCached,
 protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobStatistics.totalSlotMs,
@@ -60,8 +60,8 @@ WITH BQAudit AS (
 WHERE
 protopayload_auditlog.serviceName = 'bigquery.googleapis.com'
 AND protopayload_auditlog.methodName = 'jobservice.jobcompleted'
-AND protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.eventName IN 
-( 
+AND protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.eventName IN
+(
 'table_copy_job_completed',
 'query_job_completed',
 'extract_job_completed',

examples/bigquery-audit-log/docs/query_jobs.md

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ The Selection Bar allows the user to filter the data in the report to a specific
 ![Selection Bar](../images/query_jobs/Image1.png)

 ### No. of Queries (this week vs. last week) - per day of the week
-The graph displays the total number of queries run per day of the week for the current week, also displaying a contrast with the same data for the previous week. 
+The graph displays the total number of queries run per day of the week for the current week, also displaying a contrast with the same data for the previous week.

 ![No. of Queries (this week vs. last week) - per day of the week](../images/query_jobs/Image2.png)

examples/bigquery-billing-dashboard/bigquery_billing_export.sql

Lines changed: 6 additions & 6 deletions

@@ -4,11 +4,11 @@
 * Description:
 * This SQL script transforms the billing export table to anonymize user data
 * and included a linear projection for daily running cost. The output table
-* powers the Cloud Billing Dashboard 
+* powers the Cloud Billing Dashboard
 * (https://cloud.google.com/billing/docs/how-to/visualize-data).
 */

-WITH 
+WITH
 -- Generate dates in the current month.
 current_month_dates AS (
 SELECT gen_date
@@ -17,8 +17,8 @@ current_month_dates AS (
 GENERATE_DATE_ARRAY(
 DATE_TRUNC(CURRENT_DATE(), MONTH),
 DATE_SUB(DATE_TRUNC(
-DATE_ADD(CURRENT_DATE(), INTERVAL 1 MONTH), MONTH), 
+DATE_ADD(CURRENT_DATE(), INTERVAL 1 MONTH), MONTH),
-INTERVAL 1 DAY), 
+INTERVAL 1 DAY),
 INTERVAL 1 DAY)
 ) AS gen_date),

@@ -41,11 +41,11 @@ avg_daily_cost AS (

 -- Calculate projected_running_cost
 projected_cost AS (
-SELECT 
+SELECT
 daily_cost.gen_date AS date,
 daily_cost.cost AS daily_cost,
 avg_daily_cost.cost AS avg_daily_cost,
-(DATE_DIFF(daily_cost.gen_date, DATE_TRUNC(CURRENT_DATE, MONTH), DAY) + 1) * 
+(DATE_DIFF(daily_cost.gen_date, DATE_TRUNC(CURRENT_DATE, MONTH), DAY) + 1) *
 avg_daily_cost.cost AS projected_running_cost
 FROM daily_cost
 CROSS JOIN avg_daily_cost)

examples/bigquery-cross-project-slot-monitoring/tests/main_test.py

Lines changed: 1 addition & 1 deletion

@@ -124,7 +124,7 @@ def testCopyMetrics_NotCalledByCloudTasks(self):

   def testCopyMetrics_MissingParameters(self):
     self.app.get(
-        '/CopyMetrics', 
+        '/CopyMetrics',
         headers={'X-AppEngine-QueueName': 'SomeQueue'},
         status=400)


examples/bigquery-row-access-groups/auth_util.py

Lines changed: 2 additions & 2 deletions

@@ -29,8 +29,8 @@


 def get_credentials(admin_email, scopes):
   request = google.auth.transport.requests.Request()
-  # This retrieves the default credentials from the environment - in this 
-  # case, for the Service Account attached to the VM. The unused _ variable 
+  # This retrieves the default credentials from the environment - in this
+  # case, for the Service Account attached to the VM. The unused _ variable
   # is just the GCP project ID - we're dropping it because we don't care.
   default_credentials, _ = google.auth.default()
   # The credentials object won't include the service account e-mail address

examples/bigtable-change-key/README.md

Lines changed: 8 additions & 8 deletions

@@ -1,10 +1,10 @@
 ## Dataflow pipeline to change the key of a Bigtable

 For an optimal performance of our requests to a Bigtable instance, [it is crucial to choose
-a good key for our records](https://cloud.google.com/bigtable/docs/schema-design), 
-so that both read and writes are evenly distributed across the keys space. Although we have tools 
-such as [Key Visualizer](https://cloud.google.com/bigtable/docs/keyvis-overview), to diagnose how 
-our key is performing, it is not obvious how to change or update a key for all the records in a table. 
+a good key for our records](https://cloud.google.com/bigtable/docs/schema-design),
+so that both read and writes are evenly distributed across the keys space. Although we have tools
+such as [Key Visualizer](https://cloud.google.com/bigtable/docs/keyvis-overview), to diagnose how
+our key is performing, it is not obvious how to change or update a key for all the records in a table.

 This example contains a Dataflow pipeline to read data from a table in a
 Bigtable instance, and to write the same records to another table with the same
@@ -20,7 +20,7 @@ The build process is managed using Maven. To compile, just run

 To create a package for the pipeline, run

-`mvn package` 
+`mvn package`

 ### Setup `cbt`

@@ -144,7 +144,7 @@ You should see a job with a simple graph, similar to this one:

 You can now check that the destination table has the same records as the input
 table, and that the key has changed. You can use `cbt count` and `cbt read` for
-that purpose, by comparing with the results of the original table. 
+that purpose, by comparing with the results of the original table.

 ### Change the update key function

@@ -176,11 +176,11 @@ It is only provided as an example so it is easier to write your own function.
 ```

 The function has two input parameters:
-
+
 * `key`: the current key of the record
 * `record`: the full record, with all the column families, columns,
   values/cells, versions of cells, etc.
-
+
 The `record` is of type
 [com.google.bigtable.v2.Row](http://googleapis.github.io/googleapis/java/all/latest/apidocs/com/google/bigtable/v2/Row.html).
 You can traverse the record to recover all the elements. See [an example of how
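The README above suggests verifying the copy with `cbt count` and `cbt read`. A small sketch of such a check (the `CBT` variable and helper name are illustrative, not part of the example; point the real `cbt` at your project and instance via `.cbtrc` or flags):

```shell
# Compare row counts of the source and destination tables.
# CBT defaults to the real cbt binary; it can be pointed at a stub for testing.
CBT="${CBT:-cbt}"

same_row_count() {
  local a b
  a=$("$CBT" count "$1" | tail -n 1)
  b=$("$CBT" count "$2" | tail -n 1)
  [ "$a" = "$b" ]
}
```

For example, `same_row_count input_table output_table` (placeholder table names) exits 0 when both tables report the same number of rows; spot-check actual cells and keys with `cbt read <table> count=5`.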

examples/bigtable-change-key/scripts/copy_schema_to_new_table.sh

Lines changed: 2 additions & 2 deletions

@@ -13,7 +13,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-
+
 usage() {
   echo "Usage $0 INPUT_TABLE_NAME OUTPUT_TABLE_NAME"
   echo
@@ -25,7 +25,7 @@ then
   usage
   exit 0
 fi
-
+

 if [ $# -lt 2 ]
 then
