Merge branch 'main' into bc/components_test

BenjaminCharmes authored Oct 4, 2024
2 parents 6c84925 + 39be3fb commit fef8c4c
Showing 48 changed files with 3,926 additions and 6,408 deletions.
1 change: 0 additions & 1 deletion .github/dependabot.yml
@@ -7,7 +7,6 @@ updates:
       day: monday
       time: "05:43"
     target-branch: main
-    versioning-strategy: lockfile-only
     labels:
       - dependency_updates
     groups:
28 changes: 14 additions & 14 deletions .github/workflows/ci.yml
@@ -26,21 +26,21 @@ jobs:
       with:
         python-version: "3.10"

-      - name: Set up uv (latest)
-        run: curl -LsSf https://astral.sh/uv/install.sh | sh
+      - name: Set up uv
+        uses: astral-sh/setup-uv@v3
+        with:
+          version: "0.4.x"
+          enable-cache: true

       - name: Install dependencies
         working-directory: ./pydatalab
         run: |
-          uv venv
-          uv pip install -r requirements/requirements-all-dev.txt
-          uv pip install -e '.[all, dev]'
+          uv sync --all-extras

       - name: Run pre-commit
         working-directory: ./pydatalab
         run: |
-          source .venv/bin/activate
-          pre-commit run --all-files --show-diff-on-failure
+          uv run pre-commit run --all-files --show-diff-on-failure

   pytest:
     name: Run Python unit tests
@@ -74,21 +74,21 @@ jobs:
         run: |
           wget https://fastdl.mongodb.org/tools/db/mongodb-database-tools-ubuntu2204-x86_64-100.9.0.deb && sudo apt install ./mongodb-database-tools-*-100.9.0.deb

-      - name: Set up uv (latest)
-        run: curl -LsSf https://astral.sh/uv/install.sh | sh
+      - name: Set up uv
+        uses: astral-sh/setup-uv@v3
+        with:
+          version: "0.4.x"
+          enable-cache: true

       - name: Install locked versions of dependencies
         working-directory: ./pydatalab
         run: |
-          uv venv
-          uv pip install -r requirements/requirements-all-dev.txt
-          uv pip install -e '.[all, dev]'
+          uv sync --all-extras

       - name: Run all tests
         working-directory: ./pydatalab
         run: |
-          source .venv/bin/activate
-          pytest -rs -vvv --cov-report=term --cov-report=xml --cov ./pydatalab ./tests
+          uv run pytest -rs -vvv --cov-report=term --cov-report=xml --cov ./pydatalab ./tests

       - name: Upload coverage to Codecov
         if: matrix.python-version == '3.10' && github.repository == 'datalab-org/datalab'
5 changes: 5 additions & 0 deletions INSTALL.md
@@ -178,11 +178,16 @@ Finally, recreate the lock files with:
 ```shell
 uv pip compile pyproject.toml -o requirements/requirements-all-dev.txt --extra all --extra dev
 uv pip compile pyproject.toml -o requirements/requirements-all.txt --extra all
+uv lock
 ```

 You should then inspect the changes to the requirements files (only your new
 package and its subdependencies should have been added) and commit the changes.

+> Regenerating the `Pipfile.lock` will not be necessary for long, but in this
+> case it can be synced with the requirements.txt files via `pipenv install -r requirements/requirements-all-dev.txt`,
+> and the resulting `Pipfile.lock` can be committed to the repository.
+
 ### Test server authentication/authorisation

 There are two approaches to authentication when developing *datalab* features locally.
5,709 changes: 1,383 additions & 4,326 deletions pydatalab/Pipfile.lock

Large diffs are not rendered by default.

56 changes: 56 additions & 0 deletions pydatalab/pydatalab/routes/v0_1/collections.py
@@ -1,6 +1,7 @@
 import datetime
 import json

+from bson import ObjectId
 from flask import Blueprint, jsonify, request
 from flask_login import current_user
 from pydantic import ValidationError
@@ -325,3 +326,58 @@ def search_collections():
             ]

     return jsonify({"status": "success", "data": list(cursor)}), 200
+
+
+@COLLECTIONS.route("/collections/<collection_id>", methods=["POST"])
+def add_items_to_collection(collection_id):
+    data = request.get_json()
+    refcodes = data.get("data", {}).get("refcodes", [])
+
+    collection = flask_mongo.db.collections.find_one(
+        {"collection_id": collection_id, **get_default_permissions()}
+    )
+
+    if not collection:
+        return jsonify({"error": "Collection not found"}), 404
+
+    if not refcodes:
+        return jsonify({"error": "No item provided"}), 400
+
+    item_count = flask_mongo.db.items.count_documents(
+        {"refcode": {"$in": refcodes}, **get_default_permissions()}
+    )
+
+    if item_count == 0:
+        return jsonify({"error": "No matching items found"}), 404
+
+    update_result = flask_mongo.db.items.update_many(
+        {"refcode": {"$in": refcodes}, **get_default_permissions()},
+        {
+            "$addToSet": {
+                "relationships": {
+                    "description": "Is a member of",
+                    "relation": None,
+                    "type": "collections",
+                    "immutable_id": ObjectId(collection["_id"]),
+                    "item_id": None,
+                    "refcode": None,
+                }
+            }
+        },
+    )
+
+    if update_result.matched_count == 0:
+        return (jsonify({"status": "error", "message": "Unable to add to collection."}), 400)
+
+    if update_result.modified_count == 0:
+        return (
+            jsonify(
+                {
+                    "status": "success",
+                    "message": "No update was performed",
+                }
+            ),
+            200,
+        )
+
+    return (jsonify({"status": "success"}), 200)
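The `$addToSet` update in this endpoint is what makes it idempotent: re-posting a refcode that is already a member matches the item but modifies nothing, which is why the route distinguishes `matched_count` from `modified_count`. A minimal pure-Python analogue of that behaviour (illustrative only, not datalab code; the helper name is invented here):

```python
def add_to_set(relationships: list[dict], new_rel: dict) -> bool:
    """Sketch of MongoDB's $addToSet semantics: append the document only if
    an identical one is not already present. Returns True when the list was
    modified, mirroring update_many's modified_count > 0."""
    if new_rel in relationships:
        return False  # already a member: the "No update was performed" branch
    relationships.append(new_rel)
    return True


rel = {"description": "Is a member of", "type": "collections", "immutable_id": "abc123"}
item_relationships: list[dict] = []
first = add_to_set(item_relationships, rel)   # True: relationship added
second = add_to_set(item_relationships, rel)  # False: duplicate, list unchanged
```

This is also why a second identical POST returns 200 with "No update was performed" rather than an error.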
12 changes: 7 additions & 5 deletions pydatalab/pydatalab/routes/v0_1/items.py
@@ -129,6 +129,7 @@ def get_starting_materials():
             "date": 1,
             "chemform": 1,
             "name": 1,
+            "type": 1,
             "chemical_purity": 1,
             "supplier": 1,
             "location": 1,
@@ -300,7 +301,8 @@ def search_items():
     nresults = request.args.get("nresults", default=100, type=int)
     types = request.args.get("types", default=None)
     if isinstance(types, str):
-        types = types.split(",")  # should figure out how to parse as list automatically
+        # should figure out how to parse as list automatically
+        types = types.split(",")

     match_obj = {
         "$text": {"$search": query},
@@ -428,10 +430,10 @@ def _create_sample(
         raise RuntimeError("Invalid type")
     model = ITEM_MODELS[type_]

-    ## the following code was used previously to explicitly check schema properties.
-    ## it doesn't seem to be necessary now, with extra = "ignore" turned on in the pydantic models,
-    ## and it breaks in instances where the models use aliases (e.g., in the starting_material model)
-    ## so we are taking it out now, but leaving this comment in case it needs to be reverted.
+    # the following code was used previously to explicitly check schema properties.
+    # it doesn't seem to be necessary now, with extra = "ignore" turned on in the pydantic models,
+    # and it breaks in instances where the models use aliases (e.g., in the starting_material model)
+    # so we are taking it out now, but leaving this comment in case it needs to be reverted.
     # schema = model.schema()
     # new_sample = {k: sample_dict[k] for k in schema["properties"] if k in sample_dict}
     new_sample = sample_dict