
Commit 9cf51b1

remove int declarations if round is already used

2 parents: 21807bc + 1b4056a

35 files changed: +325 -277 lines

.github/ISSUE_TEMPLATE/bug_report.md

Lines changed: 2 additions & 2 deletions

@@ -2,13 +2,13 @@ ______________________________________________________________________
 
 name: Bug report
 about: Create a bug report to help us improve
-title: "\[BUG_TITLE\]"
+title: "[BUG_TITLE]"
 labels: bug
 assignees: ''
 
 ______________________________________________________________________
 
-# \[BUG_TITLE\]
+# [BUG_TITLE]
 
 ## Description
 

.pre-commit-config.yaml

Lines changed: 5 additions & 5 deletions

@@ -1,6 +1,6 @@
 repos:
   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.9.9
+    rev: v0.14.5
     hooks:
       - id: ruff
         args: [--fix, --exit-non-zero-on-fix]
@@ -9,18 +9,18 @@ repos:
         types_or: [python, pyi, jupyter]
 
   - repo: https://github.com/PyCQA/docformatter
-    rev: v1.7.5
+    rev: v1.7.7
     hooks:
       - id: docformatter
         additional_dependencies: [tomli]
         args: [--in-place, --black, --style=epytext]
 
   - repo: https://github.com/executablebooks/mdformat
-    rev: 0.7.10
+    rev: 1.0.0
     hooks:
       - id: mdformat
         additional_dependencies:
-          - mdformat-gfm==0.3.6
+          - mdformat-gfm==1.0.0
 
   - repo: https://github.com/ComPWA/taplo-pre-commit
     rev: v0.9.3
@@ -29,7 +29,7 @@ repos:
       - id: taplo-format
 
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v5.0.0
+    rev: v6.0.0
     hooks:
       - id: trailing-whitespace
       - id: check-docstring-first

CONTRIBUTING.md

Lines changed: 6 additions & 6 deletions

@@ -29,7 +29,7 @@ Install the development dependencies by running `pip install -r requirements-dev
 pip install -e .[dev]
 ```
 
-> \[!NOTE\]
+> [!NOTE]
 > This will install the package in editable mode (`-e`),
 > so you can make changes to the code and run them immediately.
 
@@ -90,15 +90,15 @@ pytest tests --cov=luxonis_ml --cov-report=html -n auto
 
 This command will run all tests in parallel (`-n auto`) and will generate an HTML coverage report.
 
-> \[!TIP\]
+> [!TIP]
 > The coverage report will be saved to `htmlcov` directory.
 > If you want to inspect the coverage in more detail, open `htmlcov/index.html` in a browser.
 
-> \[!IMPORTANT\]
+> [!IMPORTANT]
 > If a new feature is added, a new test should be added to cover it.
 > There is no minimum coverage requirement for now, but minimal coverage will be enforced in the future.
 
-> \[!IMPORTANT\]
+> [!IMPORTANT]
 > All tests must be passing using the `-n auto` flag before merging a PR.
 
 ## GitHub Actions
@@ -108,10 +108,10 @@ Our GitHub Actions workflow is run when a new PR is opened.
 1. First, the [pre-commit](#pre-commit-hooks) hooks must pass and the [documentation](#documentation) must be built successfully.
 1. If all previous checks pass, the [tests](#tests) are run.
 
-> \[!TIP\]
+> [!TIP]
 > Review the GitHub Actions output if your PR fails.
 
-> \[!IMPORTANT\]
+> [!IMPORTANT]
 > Successful completion of all the workflow checks is required for merging a PR.
 
 ## Making and Reviewing Changes

README.md

Lines changed: 1 addition & 1 deletion

@@ -48,7 +48,7 @@ Additional dependencies for working with specific cloud services can be installe
 - `roboflow`: Dependencies for downloading datasets from Roboflow
 - `mlflow`: Dependencies for working with MLFlow
 
-> \[!NOTE\]
+> [!NOTE]
 > If some of the additional dependencies are required but not installed (_e.g._ attempting to use Google Cloud Storage without installing the `gcs` extra), then the missing dependencies will be installed automatically.
 
 **Example**:

luxonis_ml/__init__.py

Lines changed: 6 additions & 1 deletion

@@ -1,4 +1,9 @@
-__version__ = "0.8.0"
+from typing import Final
+
+from pydantic_extra_types.semantic_version import SemanticVersion
+
+__version__: Final[str] = "0.8.1"
+__semver__: Final[SemanticVersion] = SemanticVersion.parse(__version__)
 
 import os
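The new `__semver__` attribute parses `__version__` into a structured semantic version, which compares correctly where plain string comparison fails. Below is a stdlib-only sketch of the idea; the commit itself uses `SemanticVersion` from `pydantic_extra_types`, and the `parse_semver` helper here is a hypothetical stand-in.

```python
# Stdlib-only sketch of why a parsed semantic version is useful.
# The real commit uses pydantic_extra_types.semantic_version.SemanticVersion;
# parse_semver below is a hypothetical stand-in for illustration.

def parse_semver(version: str) -> tuple[int, int, int]:
    """Split a MAJOR.MINOR.PATCH string into a comparable integer tuple."""
    major, minor, patch = version.split(".")
    return int(major), int(minor), int(patch)

__version__ = "0.8.1"
__semver__ = parse_semver(__version__)

# Integer tuples compare element-wise, so 0.10.0 sorts above 0.8.1;
# naive string comparison gets this wrong ("0.10.0" < "0.8.1" as strings).
print(__semver__ < parse_semver("0.10.0"))  # True
print("0.10.0" < "0.8.1")                   # True (the string-comparison pitfall)
```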

luxonis_ml/data/README.md

Lines changed: 14 additions & 14 deletions

@@ -4,7 +4,7 @@
 
 LuxonisML Data is a library for creating and interacting with datasets in the LuxonisDataFormat (LDF).
 
-> \[!NOTE\]
+> [!NOTE]
 > For hands-on examples of how to prepare and iteract with `LuxonisML` datasets, check out [this guide](https://github.com/luxonis/ai-tutorials/tree/main/training#%EF%B8%8F-prepare-data-using-luxonis-ml).
 
 The lifecycle of an LDF dataset is as follows:
@@ -77,7 +77,7 @@ You can create as many datasets as you want, each with a unique name.
 
 Datasets can be stored locally or in one of the supported cloud storage providers.
 
-> \[!NOTE\]
+> [!NOTE]
 > 📚 For a complete list of all parameters and methods of the `LuxonisDataset` class, see the [datasets README.md](datasets/README.md).
 
 ### Dataset Creation
@@ -92,10 +92,10 @@ dataset_name = "parking_lot"
 dataset = LuxonisDataset(dataset_name)
 ```
 
-> \[!NOTE\]
+> [!NOTE]
 > By default, the dataset will be created locally. For more information on creating a remote dataset, see [this section](datasets/README.md#creating-a-dataset-remotely).
 
-> \[!NOTE\]
+> [!NOTE]
 > If there already is a dataset with the same name, it will be loaded instead of creating a new one.
 > If you want to always create a new dataset, you can pass `delete_local=True` to the `LuxonisDataset` constructor.\
 > For detailed information about how the luxonis-ml dataset is stored in both local and remote storage, please check the [datasets README.md](datasets/README.md#in-depth-explanation-of-luxonis-ml-dataset-storage)
@@ -254,7 +254,7 @@ Once you've defined your data source, pass it to the dataset's add method:
 dataset.add(generator())
 ```
 
-> \[!NOTE\]
+> [!NOTE]
 > The `add` method accepts any iterable, not only generators.
 
 ### Defining Splits
@@ -291,7 +291,7 @@ Calling `make_splits` with no arguments will default to an 80/10/10 split.
 In order for splits to be created, there must be some new data in the dataset. If no new data were added, calling `make_splits` will raise an error.
 If you wish to delete old splits and create new ones using all the data, pass `redefine_splits=True` to the method call.
 
-> \[!NOTE\]
+> [!NOTE]
 > There are no restrictions on the split names,
 > however for most cases one should stick to `"train"`, `"val"`, and `"test"`.
 
@@ -338,8 +338,8 @@ The available commands are:
 - `luxonis_ml data ls` - lists all datasets
 - `luxonis_ml data info <dataset_name>` - prints information about the dataset
 - `luxonis_ml data inspect <dataset_name>` - renders the data in the dataset on screen using `cv2`
-- `luxonis_ml data health <dataset_name>` - checks the health of the dataset and logs and renders dataset statistics
-- `luxonis_ml data sanitize <dataset_name>` - removes duplicate files and duplicate annotations from the dataset
+- `luxonis_ml data health <dataset_name>` - checks the health of the dataset and logs and renders dataset statistics
+- `luxonis_ml data sanitize <dataset_name>` - removes duplicate files and duplicate annotations from the dataset
 - `luxonis_ml data delete <dataset_name>` - deletes the dataset
 - `luxonis_ml data export <dataset_name>` - exports the dataset to a chosen format and directory
 - `luxonis_ml data push <dataset_name>` - pushes local dataset to remote storage
@@ -357,7 +357,7 @@ This guide covers the loading of datasets using the `LuxonisLoader` class.
 
 The `LuxonisLoader` class can also take care of data augmentation, for more info see [Augmentation](#augmentation).
 
-> \[!NOTE\]
+> [!NOTE]
 > 📚 For a complete list of all parameters of the `LuxonisLoader` class, see the [loaders README.md](loaders/README.md).
 
 ### Dataset Loading
@@ -609,7 +609,7 @@ The directory can also be a zip file containing the dataset.
 The `task_name` argument can be specified as a single string or as a dictionary. If a string is provided, it will be used as the task name for all records.
 Alternatively, you can provide a dictionary that maps class names to task names for better dataset organization. See the example below.
 
-> \[!NOTE\]
+> [!NOTE]
 > 📚 For a complete list of all parameters of the `LuxonisParser` class, see the [parsers README.md](parsers/README.md).
 
 ```python
@@ -664,7 +664,7 @@ A single class label for the entire image.
 }
 ```
 
-> \[!NOTE\]
+> [!NOTE]
 > The `classification` task is always added to the dataset.
 
 ### Bounding Box
@@ -794,10 +794,10 @@ The `counts` field contains either a **compressed byte string** or an **uncompre
 
 ```
 
-> \[!NOTE\]
+> [!NOTE]
 > The RLE format is not intended for regular use and is provided mainly to support datasets that may already be in this format.
 
-> \[!NOTE\]
+> [!NOTE]
 > Masks provided as numpy arrays are converted to RLE format internally.
 
 ### Array
@@ -993,7 +993,7 @@ The following example demonstrates a simple augmentation pipeline:
 
 ```
 
-> \[!NOTE\]
+> [!NOTE]
 > The augmentations are **not** applied in order. Instead, an optimal order is determined based on the type of the augmentations to minimize the computational cost.
 
 ### Usage with LuxonisLoader
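The RLE notes in the README diff above refer to a `counts` field holding run lengths over a flattened binary mask, alternating runs of 0s and 1s and starting with the count of zeros. The following is a simplified, self-contained sketch of that uncompressed encoding, not the COCO byte-string implementation the dataset actually uses:

```python
# Hedged sketch of uncompressed RLE: alternating run lengths over a
# flattened binary mask, starting with the count of 0s (so a mask that
# begins with 1 gets a leading 0 count).

def rle_encode(mask: list[int]) -> list[int]:
    """Encode a flat 0/1 mask as alternating run lengths."""
    counts = []
    current, run = 0, 0
    for pixel in mask:
        if pixel == current:
            run += 1
        else:
            counts.append(run)
            current, run = pixel, 1
    counts.append(run)
    return counts

def rle_decode(counts: list[int], length: int) -> list[int]:
    """Expand run lengths back into a flat 0/1 mask of the given length."""
    mask, value = [], 0
    for run in counts:
        mask.extend([value] * run)
        value = 1 - value
    assert len(mask) == length
    return mask

print(rle_encode([0, 0, 1, 1, 1, 0]))  # [2, 3, 1]
```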

luxonis_ml/data/datasets/README.md

Lines changed: 1 addition & 1 deletion

@@ -230,7 +230,7 @@ dataset.make_splits((0.8, 0.1, 0.1))
 
 [A remote dataset functions similarly to a local dataset](#in-depth-explanation-of-luxonis-ml-dataset-storage). When a remote dataset is created, the same folder structure appears locally, and the equivalent structure appears in the cloud. The media folder is empty locally but is filled with images on the remote storage, where filenames become UUIDs with the appropriate suffix.
 
-> \[!NOTE\]
+> [!NOTE]
 > **IMPORTANT:** Be careful when creating a remote dataset with the same name as an already existing local dataset, because corruption of datasets may occur if not handled properly.
 >
 > Use `delete_local=True` and `delete_remote=True` to create a new dataset (deleting both local and remote storage) before calling `dataset.add()`, or use `dataset.push_to_cloud()` to push an existing local dataset to the cloud. To append data to an existing dataset using `dataset.add()`, keep `delete_local=False` and `delete_remote=False`. In that case, ensure both local and remote datasets are healthy. If the local dataset might be corrupted but the remote version is healthy, use `delete_local=True` and `delete_remote=False` so that the local dataset is deleted, while the remote stays intact.

luxonis_ml/data/datasets/luxonis_dataset.py

Lines changed: 4 additions & 4 deletions

@@ -36,7 +36,7 @@
 )
 from luxonis_ml.data.exporters.exporter_utils import (
     ExporterSpec,
-    ExporterUtils,
+    create_zip_output,
 )
 from luxonis_ml.data.utils import (
     BucketStorage,
@@ -1530,7 +1530,7 @@ def export(
                 "skeletons": getattr(self.metadata, "skeletons", None),
             },
         ),
-        DatasetType.YOLOV8: ExporterSpec(YoloV8Exporter, {}),
+        DatasetType.YOLOV8BOUNDINGBOX: ExporterSpec(YoloV8Exporter, {}),
         DatasetType.YOLOV8INSTANCESEGMENTATION: ExporterSpec(
             YoloV8InstanceSegmentationExporter, {}
         ),
@@ -1573,7 +1573,7 @@ def export(
             self.identifier, out_path, max_partition_size_gb, **spec.kwargs
         )
 
-        exporter.transform(prepared_ldf=prepared_ldf)
+        exporter.export(prepared_ldf=prepared_ldf)
 
         # Detect whether partitioned export was produced and the max part index
         def _detect_last_part(base: Path, ds_id: str) -> int | None:
@@ -1593,7 +1593,7 @@ def _detect_last_part(base: Path, ds_id: str) -> int | None:
         last_part = _detect_last_part(out_path, self.identifier)
 
         if zip_output:
-            archives = ExporterUtils.create_zip_output(
+            archives = create_zip_output(
                 max_partition_size=max_partition_size_gb,
                 output_path=out_path,
                 part=last_part,
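The `ExporterSpec` entries in the hunk above form a dispatch table: each dataset type maps to an exporter class plus constructor kwargs, and `export` instantiates the selected class and calls its `export` method. A simplified, self-contained sketch of that pattern follows; all class names, signatures, and return values here are illustrative stand-ins, not the library's actual API:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

# Illustrative stand-ins for the library's DatasetType and exporter classes.
class DatasetType(Enum):
    COCO = auto()
    YOLOV8BOUNDINGBOX = auto()

class YoloV8Exporter:
    def __init__(self, identifier: str, **kwargs) -> None:
        self.identifier = identifier
        self.kwargs = kwargs

    def export(self, prepared_ldf: list) -> str:
        return f"exported {len(prepared_ldf)} records for {self.identifier}"

@dataclass
class ExporterSpec:
    exporter_cls: type
    kwargs: dict = field(default_factory=dict)

# Dispatch table: one spec per dataset type, mirroring the diff's structure.
SPECS = {DatasetType.YOLOV8BOUNDINGBOX: ExporterSpec(YoloV8Exporter, {})}

# Look up the spec, build the exporter, run it.
spec = SPECS[DatasetType.YOLOV8BOUNDINGBOX]
exporter = spec.exporter_cls("my_dataset", **spec.kwargs)
print(exporter.export(prepared_ldf=[1, 2, 3]))  # exported 3 records for my_dataset
```

Keeping construction data in a spec rather than pre-built instances means exporters are only instantiated for the format actually requested.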

luxonis_ml/data/datasets/utils.py

Lines changed: 2 additions & 2 deletions

@@ -16,7 +16,7 @@ def get_file(
     remote_path: PosixPathType,
     local_path: PathType,
     mlflow_instance: ModuleType | None = ...,
-    default: Literal[None] = ...,
+    default: None = ...,
 ) -> Path | None: ...
 
 
@@ -126,7 +126,7 @@ def get_dir(
     local_dir: PathType,
     mlflow_instance: ModuleType | None = ...,
     *,
-    default: Literal[None] = None,
+    default: None = None,
 ) -> Path | None: ...
 
 
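The `Literal[None]` to `None` change works because PEP 484 treats a bare `None` annotation as `type(None)`, making `Literal[None]` redundant (Ruff flags this as PYI061). A sketch of the same overload shape, with simplified and hypothetical parameters rather than the library's real signature:

```python
from __future__ import annotations

from pathlib import Path
from typing import overload

# PEP 484: in annotations, `None` already denotes the None singleton, so
# `default: None = ...` says the same thing as `default: Literal[None] = ...`
# but shorter. get_file below is a simplified stand-in, not the real helper.

@overload
def get_file(remote_path: str, default: None = ...) -> Path | None: ...
@overload
def get_file(remote_path: str, default: str) -> Path: ...

def get_file(remote_path: str, default: str | None = None) -> Path | None:
    """Pretend fetch: succeed for 'exists://' paths, else fall back to default."""
    if remote_path.startswith("exists://"):
        return Path(remote_path.removeprefix("exists://"))
    return Path(default) if default is not None else None

print(get_file("missing://x"))  # None
```

The two overloads let a type checker know the result is only `None` when no string default was supplied.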

luxonis_ml/data/exporters/__init__.py

Lines changed: 1 addition & 1 deletion

@@ -12,7 +12,7 @@
 from .voc_exporter import VOCExporter
 from .yolov4_exporter import YoloV4Exporter
 from .yolov6_exporter import YoloV6Exporter
-from .yolov8_exporter import YoloV8Exporter
+from .yolov8_bbox_exporter import YoloV8Exporter
 from .yolov8_instance_segmentation_exporter import (
     YoloV8InstanceSegmentationExporter,
 )
