Commit d595ea2

Final renaming of models to include sources
1 parent 0eeae68 commit d595ea2

13 files changed: +47 -40 lines changed


CHANGELOG.md

Lines changed: 5 additions & 0 deletions
@@ -8,6 +8,11 @@ and this project adheres to
 
 ## [Unreleased]
 
+- **Breaking**: Support linting of sources.
+- **Breaking**: `--fail_any_model_under` becomes `--fail-any-item-under` and
+  `--fail_project_under` becomes `--fail-project-under`.
+- **Breaking**: `model_filter_names` becomes `rule_filter_names`.
+
 ## [0.6.0] - 2024-08-23
 
 - **Breaking**: Improve error handling in CLI. Log messages are written in
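
The flag renames are mechanical: only the spelling changes, not the semantics. A minimal sketch of invoking the renamed flags through click's test runner, mirroring the pattern in `tests/test_cli.py` below; the manifest path is a placeholder for a manifest produced by `dbt parse`:

```python
from click.testing import CliRunner

from dbt_score.cli import lint

runner = CliRunner()
result = runner.invoke(
    lint,
    [
        "--manifest", "target/manifest.json",  # placeholder path
        "--fail-project-under", "10.0",
        "--fail-any-item-under", "10.0",
    ],
)
print(result.exit_code)  # 1 if any score falls below the thresholds
```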

README.md

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@
 
 ## What is `dbt-score`?
 
-`dbt-score` is a linter for dbt model metadata.
+`dbt-score` is a linter for dbt metadata.
 
 [dbt][dbt] (Data Build Tool) is a great framework for creating, building,
 organizing, testing and documenting _data models_, i.e. data sets living in a

docs/configuration.md

Lines changed: 3 additions & 3 deletions
@@ -51,8 +51,8 @@ The following options can be set in the `pyproject.toml` file:
 - `disabled_rules`: A list of rules to disable.
 - `fail_project_under` (default: `5.0`): If the project score is below this
   value the command will fail with return code 1.
-- `fail_any_item_under` (default: `5.0`): If any model or source scores below this value
-  the command will fail with return code 1.
+- `fail_any_item_under` (default: `5.0`): If any model or source scores below
+  this value the command will fail with return code 1.
 
 #### Badges configuration
 
@@ -70,7 +70,7 @@ All badges except `wip` can be configured with the following option:
 
 - `threshold`: The threshold for the badge. A decimal number between `0.0` and
   `10.0` that will be used to compare to the score. The threshold is the minimum
-  score required for a model to be rewarded with a certain badge.
+  score required for a model or source to be rewarded with a certain badge.
 
 The default values can be found in the
 [BadgeConfig](reference/config.md#dbt_score.config.BadgeConfig).
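
As a usage note, both thresholds are read from `pyproject.toml`. A minimal sketch of reading them back, assuming only the option names and defaults documented above, and assuming the `[tool.dbt-score]` table name implied by the configuration docs' anchors:

```python
import tomllib  # stdlib since Python 3.11

# Look up the documented thresholds; defaults match the docs (5.0).
with open("pyproject.toml", "rb") as f:
    config = tomllib.load(f).get("tool", {}).get("dbt-score", {})

fail_project_under = config.get("fail_project_under", 5.0)
fail_any_item_under = config.get("fail_any_item_under", 5.0)
print(fail_project_under, fail_any_item_under)
```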

docs/create_rules.md

Lines changed: 12 additions & 11 deletions
@@ -1,9 +1,9 @@
 # Create rules
 
-In order to lint and score models or sources, `dbt-score` uses a set of
-rules that are applied to each item. A rule can pass or fail when it is run.
-Based on the severity of the rule, items are scored with the weighted
-average of the rules results. Note that `dbt-score` comes bundled with a
+In order to lint and score models or sources, `dbt-score` uses a set of rules
+that are applied to each item. A rule can pass or fail when it is run. Based on
+the severity of the rule, items are scored with the weighted average of the
+rules results. Note that `dbt-score` comes bundled with a
 [set of default rules](rules/generic.md).
 
 On top of the generic rules, it's possible to add your own rules. Two ways exist
@@ -31,10 +31,11 @@ The name of the function is the name of the rule and the docstring of the
 function is its description. Therefore, it is important to use a
 self-explanatory name for the function and document it well.
 
-The type annotation for the rule's argument dictates whether the rule should
-be applied to dbt models or sources.
+The type annotation for the rule's argument dictates whether the rule should be
+applied to dbt models or sources.
 
 Here is the same example rule, applied to sources:
+
 ```python
 from dbt_score import rule, RuleViolation, Source
 
@@ -68,7 +69,7 @@ class ModelHasDescription(Rule):
         """Evaluate the rule."""
         if not model.description:
             return RuleViolation(message="Model lacks a description.")
-
+
 class SourceHasDescription(Rule):
     description = "A source should have a description."
 
@@ -116,8 +117,8 @@ def sql_has_reasonable_number_of_lines(model: Model, max_lines: int = 200) -> Ru
 ### Filtering rules
 
 Custom and standard rules can be configured to have filters. Filters allow
-models or sources to be ignored by one or multiple rules if the item doesn't satisfy
-the filter criteria.
+models or sources to be ignored by one or multiple rules if the item doesn't
+satisfy the filter criteria.
 
 Filters are created using the same discovery mechanism and interface as custom
 rules, except they do not accept parameters. Similar to Python's built-in
@@ -138,7 +139,8 @@ class SkipSchemaY(RuleFilter):
         return model.schema.lower() != 'y'
 ```
 
-Filters also rely on type-annotations to dictate whether they apply to models or sources:
+Filters also rely on type-annotations to dictate whether they apply to models or
+sources:
 
 ```python
 from dbt_score import RuleFilter, rule_filter, Source
@@ -154,7 +156,6 @@ class SkipSourceDatabaseB(RuleFilter):
         return source.database.lower() != 'b'
 ```
 
-
 Similar to setting a rule severity, standard rules can have filters set in the
 [configuration file](configuration.md/#tooldbt-scorerulesrule_namespacerule_name),
 while custom rules accept the configuration file or a decorator parameter.
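
For completeness, a hedged sketch of what the function-style source rule referenced in these hunks might look like. The import line and the description are taken verbatim from the diff; the `source.description` attribute is an assumption, mirroring `model.description` from the class-based example:

```python
from dbt_score import rule, RuleViolation, Source


@rule
def source_has_description(source: Source) -> RuleViolation | None:
    """A source should have a description."""
    # The Source annotation is what makes dbt-score apply this rule to
    # sources rather than models; the docstring doubles as the description.
    if not source.description:  # assumed attribute, mirroring model.description
        return RuleViolation(message="Source lacks a description.")
    return None
```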

docs/get_started.md

Lines changed: 2 additions & 2 deletions
@@ -40,8 +40,8 @@ It's also possible to automatically run `dbt parse`, to generate the
 dbt-score lint --run-dbt-parse
 ```
 
-To lint only a selection of models, the argument `--select` can be used. It
-accepts any
+To lint only a selection of models or sources, the argument `--select` can be
+used. It accepts any
 [dbt node selection syntax](https://docs.getdbt.com/reference/node-selection/syntax):
 
 ```shell
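
A minimal sketch of a selective run; `my_model+` is a placeholder selector in dbt's node selection syntax, and `dbt-score` is assumed to be on `PATH`:

```python
import subprocess

# Lint only my_model and everything downstream of it (the "+" suffix).
subprocess.run(["dbt-score", "lint", "--select", "my_model+"], check=False)
```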

docs/index.md

Lines changed: 8 additions & 7 deletions
@@ -2,8 +2,9 @@
 
 `dbt-score` is a linter for [dbt](https://www.getdbt.com/) metadata.
 
-dbt allows data practitioners to organize their data in to _models_. Those
-models have metadata associated with them: documentation, tests, types, etc.
+dbt allows data practitioners to organize their data in to _models_ and
+_sources_. Those models and sources have metadata associated with them:
+documentation, tests, types, etc.
 
 `dbt-score` allows to lint and score this metadata, in order to enforce (or
 encourage) good practices.
@@ -25,15 +26,15 @@ score.
 
 ## Philosophy
 
-dbt models are often used as metadata containers: either in YAML files or
-through the use of `{{ config() }}` blocks, they are associated with a lot of
+dbt models/sources are often used as metadata containers: either in YAML files
+or through the use of `{{ config() }}` blocks, they are associated with a lot of
 information. At scale, it becomes tedious to enforce good practices in large
-data teams dealing with many models.
+data teams dealing with many models/sources.
 
 To that end, `dbt-score` has 2 main features:
 
-- It runs rules on dbt models and sources, and displays any rule violations. These can be used in
-  interactive environments or in CI.
+- It runs rules on dbt models and sources, and displays any rule violations.
+  These can be used in interactive environments or in CI.
 - Using those run results, it scores items, to ascribe them a measure of their
   maturity. This score can help gamify metadata improvements/coverage, and be
   reflected in data catalogs.

docs/programmatic_invocations.md

Lines changed: 3 additions & 3 deletions
@@ -61,9 +61,9 @@ When `dbt-score` terminates, it exists with one of the following exit codes:
   project being linted either doesn't raise any warning, or the warnings are
   small enough to be above the thresholds. This generally means "successful
   linting".
-- `1` in case of linting errors. This is the unhappy case: some models in the
-  project raise enough warnings to have a score below the defined thresholds.
-  This generally means "linting doesn't pass".
+- `1` in case of linting errors. This is the unhappy case: some models or
+  sources in the project raise enough warnings to have a score below the defined
+  thresholds. This generally means "linting doesn't pass".
 - `2` in case of an unexpected error. This happens for example if something is
   misconfigured (for example a faulty dbt project), or the wrong parameters are
   given to the CLI. This generally means "setup needs to be fixed".
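
A sketch of acting on these documented exit codes when invoking the CLI in-process; the manifest path is a placeholder:

```python
from click.testing import CliRunner

from dbt_score.cli import lint

runner = CliRunner()
result = runner.invoke(lint, ["--manifest", "target/manifest.json"])
if result.exit_code == 0:
    print("linting passed")
elif result.exit_code == 1:
    print("some models or sources scored below a threshold")
else:  # 2: unexpected error, e.g. a faulty dbt project or bad arguments
    print("setup needs to be fixed")
```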

src/dbt_score/cli.py

Lines changed: 1 addition & 1 deletion
@@ -80,7 +80,7 @@ def cli() -> None:
     default=False,
 )
 @click.option(
-    "--fail_project_under",
+    "--fail-project-under",
     help="Fail if the project score is under this value.",
     type=float,
     is_flag=False,
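
Worth noting: click normalizes a kebab-case flag to a snake_case parameter name, so this rename changes only the user-facing spelling. A minimal standalone sketch (not the project's code) illustrating the mapping:

```python
import click


@click.command()
@click.option("--fail-project-under", type=float, default=5.0)
def demo(fail_project_under: float) -> None:
    # click hands --fail-project-under to the snake_case parameter,
    # so the Python side is unaffected by the flag rename.
    click.echo(f"threshold={fail_project_under}")


if __name__ == "__main__":
    demo()
```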

src/dbt_score/formatters/json_formatter.py

Lines changed: 7 additions & 7 deletions
@@ -4,7 +4,7 @@
 
 ```json
 {
-  "models": {
+  "evaluables": {
     "model_foo": {
       "score": 5.0,
       "badge": "🥈",
@@ -60,14 +60,14 @@ class JSONFormatter(Formatter):
     def __init__(self, *args: Any, **kwargs: Any):
         """Instantiate formatter."""
         super().__init__(*args, **kwargs)
-        self._model_results: dict[str, dict[str, Any]] = {}
+        self.evaluable_results: dict[str, dict[str, Any]] = {}
         self._project_results: dict[str, Any]
 
     def evaluable_evaluated(
         self, evaluable: Evaluable, results: EvaluableResultsType, score: Score
     ) -> None:
         """Callback when an evaluable item has been evaluated."""
-        self._model_results[evaluable.name] = {
+        self.evaluable_results[evaluable.name] = {
             "score": score.value,
             "badge": score.badge,
             "pass": score.value >= self._config.fail_any_item_under,
@@ -76,19 +76,19 @@ def evaluable_evaluated(
         for rule, result in results.items():
             severity = rule.severity.name.lower()
             if result is None:
-                self._model_results[evaluable.name]["results"][rule.source()] = {
+                self.evaluable_results[evaluable.name]["results"][rule.source()] = {
                     "result": "OK",
                     "severity": severity,
                     "message": None,
                 }
             elif isinstance(result, RuleViolation):
-                self._model_results[evaluable.name]["results"][rule.source()] = {
+                self.evaluable_results[evaluable.name]["results"][rule.source()] = {
                     "result": "WARN",
                     "severity": severity,
                     "message": result.message,
                 }
             else:
-                self._model_results[evaluable.name]["results"][rule.source()] = {
+                self.evaluable_results[evaluable.name]["results"][rule.source()] = {
                     "result": "ERR",
                     "severity": severity,
                     "message": str(result),
@@ -102,7 +102,7 @@ def project_evaluated(self, score: Score) -> None:
             "pass": score.value >= self._config.fail_project_under,
         }
         document = {
-            "models": self._model_results,
+            "evaluables": self.evaluable_results,
             "project": self._project_results,
         }
         print(json.dumps(document, indent=2, ensure_ascii=False))
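
Downstream consumers keying on the old `"models"` field need the same rename. A minimal sketch of reading the new document shape, using an illustrative payload shaped like the docstring above:

```python
import json

# Payload mirrors the structure shown in the formatter's docstring;
# the values are illustrative only.
payload = """
{
  "evaluables": {
    "model_foo": {"score": 5.0, "badge": "🥈", "pass": true, "results": {}}
  },
  "project": {"pass": true}
}
"""
document = json.loads(payload)
for name, item in document["evaluables"].items():
    print(name, item["score"], item["pass"])
```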

src/dbt_score/lint.py

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-"""Lint dbt models metadata."""
+"""Lint dbt metadata."""
 
 from pathlib import Path
 from typing import Iterable, Literal

src/dbt_score/models.py

Lines changed: 2 additions & 2 deletions
@@ -44,7 +44,7 @@ def from_raw_values(cls, raw_values: dict[str, Any]) -> "Constraint":
 
 @dataclass
 class Test:
-    """Test for a column or model.
+    """Test for a column, model or source.
 
     Attributes:
         name: The name of the test.
@@ -372,7 +372,7 @@ def __hash__(self) -> int:
 
 
 class ManifestLoader:
-    """Load the models and tests from the manifest."""
+    """Load the models, sources and tests from the manifest."""
 
     def __init__(self, file_path: Path, select: Iterable[str] | None = None):
         """Initialize the ManifestLoader.

tests/formatters/test_json_formatter.py

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ def test_json_formatter(
     assert (
         stdout
         == """{
-  "models": {
+  "evaluables": {
     "model1": {
       "score": 10.0,
       "badge": "🥇",

tests/test_cli.py

Lines changed: 1 addition & 1 deletion
@@ -77,7 +77,7 @@ def test_fail_project_under(manifest_path):
     with patch("dbt_score.cli.Config._load_toml_file"):
         runner = CliRunner()
         result = runner.invoke(
-            lint, ["--manifest", manifest_path, "--fail_project_under", "10.0"]
+            lint, ["--manifest", manifest_path, "--fail-project-under", "10.0"]
         )
 
         assert "model1" in result.output
