chore: Add flake8 static check (#449)
* remove trailing whitespace

* add flake8 static check to check_format.sh and Dockerfile

* fix flake8 issues tools/slo-generator

* fix flake8 issues tools/quota-manager

* ignore line length

* remove trailing whitespace

* fix tools

* fix examples

* remove max line length as it's later ignored
Jacob Ferriero authored Apr 21, 2020
1 parent 6318437 commit fedbbea
Showing 241 changed files with 28,169 additions and 28,144 deletions.
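Most of the hunks below only strip trailing whitespace, so many `-`/`+` line pairs look identical. The check_format.sh diff itself is not among the hunks loaded in this view, but the commit notes ("add flake8 static check", "ignore line length") suggest an invocation along these lines — a minimal sketch, with the checked directories and exact flags assumed:

```sh
#!/bin/bash
# Hypothetical sketch of the flake8 step implied by the commit message.
# E501 (line too long) is suppressed, matching the "ignore line length"
# commit note; the directories checked here are assumptions.
if ! flake8 --ignore=E501 tools/ examples/; then
  echo "flake8 static check failed" >&2
  exit 1
fi
```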
1 change: 1 addition & 0 deletions cloudbuild/Dockerfile
```diff
@@ -10,6 +10,7 @@ RUN apt-get update && apt-get install -y build-essential
 # install yapf
 RUN pip install yapf
 RUN pip3 install yapf
+RUN pip3 install flake8

 # install golang (+gofmt)
 RUN apt-get install -y golang
```
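With flake8 now baked into the CI image, the same check can be reproduced locally — a sketch, with the image tag assumed:

```sh
# Build the CI image and confirm the new tool is present (tag name assumed).
docker build -t cloudbuild-check cloudbuild/
docker run --rm cloudbuild-check flake8 --version
```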
2 changes: 1 addition & 1 deletion examples/alert-absence-dedup/policy_doc.md
```diff
@@ -18,6 +18,6 @@ sources are monitored by this alert.
 data ingestion.

 ### Case 2: At least 1 time-series absent
-1. The time series that has gone absent should be able to be identified with
+1. The time series that has gone absent should be able to be identified with
 by the chart for metric: ${metric.display_name}.
 2. Investigate the specific data sources. The resource type is: ${resource.type}
```
6 changes: 3 additions & 3 deletions examples/bigquery-audit-log/README.md
```diff
@@ -23,14 +23,14 @@ A short description relevant to our use case is presented below -
 1. In the GCP Cloud Console select the project you want to export the logs to. Go to Stackdriver --> Logging --> Exports.
 2. Click on Create Export. Select the following in the drop down menu: "BigQuery", "All logs", "Any log level", "No limit" and "Jump to now" respectively.
 3. In the configuration windows on the right side of the screen, enter a Sink Name of your choice. Select BigQuery as Sink Service. Select the "BigQuery Audit" (refer to Prerequisites) dataset as the Sink Destination.
-4. Click on Create Sink.
+4. Click on Create Sink.
 5. A message box pops up to notify you of successful creation. Click on Close.
 6. Click on the Play button located on the top bar to start the export.

 ### 2. Scheduling a BigQuery job
-Use the SQL script in the file bigquery_audit_log.sql (located in this GitHub folder) to create a scheduled query in BigQuery. Click [here](https://cloud.google.com/bigquery/docs/scheduling-queries) for instructions on how to create scheduled queries.
+Use the SQL script in the file bigquery_audit_log.sql (located in this GitHub folder) to create a scheduled query in BigQuery. Click [here](https://cloud.google.com/bigquery/docs/scheduling-queries) for instructions on how to create scheduled queries.

-Create a materialized table that stores data from the scheduled query.
+Create a materialized table that stores data from the scheduled query.
 You can give it a custom name, we will be referring to it as **bigquery_audit_log**.

 ### 3. Copying the data source in Data Studio
```
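The README above schedules the query through the console; for reference, a hedged CLI equivalent using `bq mk --transfer_config` (project, dataset, and display names are placeholders, and the query text is abbreviated):

```sh
# Hypothetical CLI equivalent of the console steps above; names are placeholders.
bq mk --transfer_config \
  --project_id=my-project \
  --target_dataset=my_dataset \
  --display_name="BigQuery audit log" \
  --data_source=scheduled_query \
  --params='{"query":"<contents of bigquery_audit_log.sql>","destination_table_name_template":"bigquery_audit_log","write_disposition":"WRITE_TRUNCATE"}'
```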
6 changes: 3 additions & 3 deletions examples/bigquery-audit-log/bigquery_audit_log.sql
```diff
@@ -42,7 +42,7 @@ WITH BQAudit AS (
     protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobStatistics.startTime, SECOND)) / 60) AS INT64)
     AS executionMinuteBuckets,
   IF(COALESCE(protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobStatistics.totalProcessedBytes,
-    protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobStatistics.totalSlotMs,
+    protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobStatistics.totalSlotMs,
     protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobStatus.error.code) IS NULL, TRUE, FALSE
   ) AS isCached,
   protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobStatistics.totalSlotMs,
@@ -60,8 +60,8 @@ WITH BQAudit AS (
 WHERE
   protopayload_auditlog.serviceName = 'bigquery.googleapis.com'
   AND protopayload_auditlog.methodName = 'jobservice.jobcompleted'
-  AND protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.eventName IN
-  (
+  AND protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.eventName IN
+  (
   'table_copy_job_completed',
   'query_job_completed',
   'extract_job_completed',
```
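Note the isCached logic visible in this hunk: `COALESCE(...) IS NULL` is TRUE only when totalProcessedBytes, totalSlotMs, and the job's error code are all NULL — the completed job consumed no measurable resources and raised no error, which is the signature of a query served from cache.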
2 changes: 1 addition & 1 deletion examples/bigquery-audit-log/docs/query_jobs.md
```diff
@@ -10,7 +10,7 @@ The Selection Bar allows the user to filter the data in the report to a specific
 ![Selection Bar](../images/query_jobs/Image1.png)

 ### No. of Queries (this week vs. last week) - per day of the week
-The graph displays the total number of queries run per day of the week for the current week, also displaying a contrast with the same data for the previous week.
+The graph displays the total number of queries run per day of the week for the current week, also displaying a contrast with the same data for the previous week.

 ![No. of Queries (this week vs. last week) - per day of the week](../images/query_jobs/Image2.png)
```
12 changes: 6 additions & 6 deletions examples/bigquery-billing-dashboard/bigquery_billing_export.sql
```diff
@@ -4,11 +4,11 @@
  * Description:
  * This SQL script transforms the billing export table to anonymize user data
  * and included a linear projection for daily running cost. The output table
- * powers the Cloud Billing Dashboard
+ * powers the Cloud Billing Dashboard
  * (https://cloud.google.com/billing/docs/how-to/visualize-data).
  */

-WITH
+WITH
 -- Generate dates in the current month.
 current_month_dates AS (
   SELECT gen_date
@@ -17,8 +17,8 @@ current_month_dates AS (
   GENERATE_DATE_ARRAY(
     DATE_TRUNC(CURRENT_DATE(), MONTH),
     DATE_SUB(DATE_TRUNC(
-      DATE_ADD(CURRENT_DATE(), INTERVAL 1 MONTH), MONTH),
-      INTERVAL 1 DAY),
+      DATE_ADD(CURRENT_DATE(), INTERVAL 1 MONTH), MONTH),
+      INTERVAL 1 DAY),
     INTERVAL 1 DAY)
 ) AS gen_date),
@@ -41,11 +41,11 @@ avg_daily_cost AS (
 -- Calculate projected_running_cost
 projected_cost AS (
-  SELECT
+  SELECT
   daily_cost.gen_date AS date,
   daily_cost.cost AS daily_cost,
   avg_daily_cost.cost AS avg_daily_cost,
-  (DATE_DIFF(daily_cost.gen_date, DATE_TRUNC(CURRENT_DATE, MONTH), DAY) + 1) *
+  (DATE_DIFF(daily_cost.gen_date, DATE_TRUNC(CURRENT_DATE, MONTH), DAY) + 1) *
   avg_daily_cost.cost AS projected_running_cost
 FROM daily_cost
 CROSS JOIN avg_daily_cost)
```
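The projection in this last hunk is simply days-elapsed-in-month × average daily cost: on the 10th day of a month with an average daily cost of $25, projected_running_cost = 10 × $25 = $250.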
(file header not loaded in this view)
```diff
@@ -124,7 +124,7 @@ def testCopyMetrics_NotCalledByCloudTasks(self):

   def testCopyMetrics_MissingParameters(self):
     self.app.get(
-        '/CopyMetrics',
+        '/CopyMetrics',
         headers={'X-AppEngine-QueueName': 'SomeQueue'},
         status=400)
```
4 changes: 2 additions & 2 deletions examples/bigquery-row-access-groups/auth_util.py
```diff
@@ -29,8 +29,8 @@

 def get_credentials(admin_email, scopes):
     request = google.auth.transport.requests.Request()
-    # This retrieves the default credentials from the environment - in this
-    # case, for the Service Account attached to the VM. The unused _ variable
+    # This retrieves the default credentials from the environment - in this
+    # case, for the Service Account attached to the VM. The unused _ variable
     # is just the GCP project ID - we're dropping it because we don't care.
     default_credentials, _ = google.auth.default()
     # The credentials object won't include the service account e-mail address
```
16 changes: 8 additions & 8 deletions examples/bigtable-change-key/README.md
````diff
@@ -1,10 +1,10 @@
 ## Dataflow pipeline to change the key of a Bigtable

 For an optimal performance of our requests to a Bigtable instance, [it is crucial to choose
-a good key for our records](https://cloud.google.com/bigtable/docs/schema-design),
-so that both read and writes are evenly distributed across the keys space. Although we have tools
-such as [Key Visualizer](https://cloud.google.com/bigtable/docs/keyvis-overview), to diagnose how
-our key is performing, it is not obvious how to change or update a key for all the records in a table.
+a good key for our records](https://cloud.google.com/bigtable/docs/schema-design),
+so that both read and writes are evenly distributed across the keys space. Although we have tools
+such as [Key Visualizer](https://cloud.google.com/bigtable/docs/keyvis-overview), to diagnose how
+our key is performing, it is not obvious how to change or update a key for all the records in a table.

 This example contains a Dataflow pipeline to read data from a table in a
 Bigtable instance, and to write the same records to another table with the same
@@ -20,7 +20,7 @@ The build process is managed using Maven. To compile, just run

 To create a package for the pipeline, run

-`mvn package`
+`mvn package`

 ### Setup `cbt`
@@ -144,7 +144,7 @@ You should see a job with a simple graph, similar to this one:

 You can now check that the destination table has the same records as the input
 table, and that the key has changed. You can use `cbt count` and `cbt read` for
-that purpose, by comparing with the results of the original table.
+that purpose, by comparing with the results of the original table.

 ### Change the update key function
@@ -176,11 +176,11 @@ It is only provided as an example so it is easier to write your own function.
 ```

 The function has two input parameters:

 * `key`: the current key of the record
 * `record`: the full record, with all the column families, columns,
 values/cells, versions of cells, etc.

 The `record` is of type
 [com.google.bigtable.v2.Row](http://googleapis.github.io/googleapis/java/all/latest/apidocs/com/google/bigtable/v2/Row.html).
 You can traverse the record to recover all the elements. See [an example of how
````
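The verification step in this README leans on `cbt count` and `cbt read`; a hedged sketch of the comparison (project, instance, and table names are placeholders):

```sh
# Count rows in both tables and spot-check a few records (names are placeholders).
cbt -project my-project -instance my-instance count input-table
cbt -project my-project -instance my-instance count output-table
cbt -project my-project -instance my-instance read output-table count=5
```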
(file header not loaded in this view)
```diff
@@ -13,7 +13,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.

 usage() {
   echo "Usage $0 INPUT_TABLE_NAME OUTPUT_TABLE_NAME"
   echo
@@ -25,7 +25,7 @@ then
   usage
   exit 0
 fi
-
+
 if [ $# -lt 2 ]
 then
```
(Diffs for the remaining changed files were not loaded in this view.)
