Conversation
[still dependent on apt to install pip]
modifying sda benchmark data
| "type": "map", | ||
| "array": "astros.people", | ||
| "root": "map_astros", | ||
| "root": "map", |
Not sure why this change is necessary - need to check.
I ran into an issue trying to run this on AWS, since the task name shadows the function name.
benchmarks/600.workflows/6300.sda-workflow/dev/data/sda-workflow-local.json
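A minimal illustration of the shadowing problem mentioned above (hypothetical names, not the actual SDA workflow code): if the name of a task is rebound over the handler function it wraps, the original callable becomes unreachable through that name.

```python
def process(event):
    """Hypothetical handler function."""
    return {"status": "ok", "input": event}

# Keep a reference to the callable in a registry keyed by name.
tasks = {}
tasks["process"] = process

# Shadowing: rebinding the name "process" to a task descriptor
# (a plain dict here) hides the original handler.
process = {"task": "process", "memory": 128}

try:
    process({"x": 1})  # the name now refers to a dict, not a function
except TypeError as err:
    print("shadowed:", err)

# The registry still holds the original callable.
print(tasks["process"]({"x": 1}))
```

Renaming either the task or the function avoids the collision.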
| "memory": 128, | ||
| "languages": ["python"], | ||
| "modules": ["storage"], | ||
| "container-image": "logru/sda-no-db:latest" |
This is good but it would be great if we had a clear path to reproduce these images - @LoloGruber, can you please point me to this Dockerfile? On the main branch, we have now C++ support that gives us a way to add "dependencies" images, and I can integrate that.
I added code to container.py to build an image by injecting aws bootstrap code to a user-defined base image. This is the related Dockerfile for AWS
I meant the Dockerfile that builds SDA :-) I don't see it in the Dockerfile you linked.
| import json | ||
| import os | ||
|
|
||
| class MemgraphConfig: |
This is a comment for me - we need to integrate it internally, just like we did with the config for Redis.
| "module_packages": {} | ||
| "module_packages": { | ||
| "storage": [ | ||
| "boto3" |
Interesting - I guess it's a side-effect of using a custom Docker image?
No, this is required to be able to transfer files via upload_func to S3. I think the workflows branch was previously only tested on Azure.
We tested the workflow on all three providers.
I suspect something else: previously, functions were deployed on AWS either as code packages (so AWS provides a runtime with boto3) or as containers, and those containers inherit from an AWS-specific base image that already contains boto3. Now that we support a custom container base image that is not AWS-specific, boto3 is missing.
Ah okay, that makes sense
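To make the dependency mechanism concrete, here is a hypothetical helper (not actual SeBS code) that derives the pip requirements for a benchmark from the `module_packages` entry shown in the diff above; with this, packages such as `boto3` get installed into a custom base image that does not ship them:

```python
# Benchmark config fragment mirroring the diff above.
config = {
    "modules": ["storage"],
    "module_packages": {
        # boto3 must be declared explicitly: a non-AWS base image
        # does not bundle it, unlike the AWS-provided runtime.
        "storage": ["boto3"],
    },
}

def required_packages(cfg):
    """Collect pip packages for all enabled modules (hypothetical helper)."""
    pkgs = set()
    for module in cfg.get("modules", []):
        pkgs.update(cfg.get("module_packages", {}).get(module, []))
    return sorted(pkgs)

print(required_packages(config))
```

The resulting list could then be fed to `pip install` during the image build.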
sebs/benchmark.py
| ], | ||
| is_workflow: bool, | ||
| ) -> Tuple[bool, str, bool, str]: | ||
| # Check if custom container image is specified which would collide with container deployment option |
I'm wondering if it should not be reversed - if the benchmark specified a custom container image, then shouldn't we instead throw an error when non-container (code package) deployment is selected? Because that is a clear contradiction.
You are correct. I didn't want to make any big changes to the code flow of SeBS, but I would also rather deny any other type of deployment if a custom image is specified.
I will just remove this check and force a container deployment when a custom image is specified
sebs/aws/aws.py
| f'"{self.config.resources.redis_password}"', | ||
| ) | ||
|
|
||
| assert not (code_package.container_deployment and code_package.benchmark_config.container_image is not None), \ |
As noted in a different comment - I think it should be an error to try to deploy a benchmark with custom image in the "code package" mode (non-container).
As you see in the branching below, for me it is a third option to deploy a function: Code Package (Python functions as a zip), Container (Python functions within a SeBS image), and User-defined Container (containing the Python functions and SeBS boilerplate code). I wanted to prevent the user from both passing the --container-deployment flag and specifying a custom base image. If you agree, I think it would be a valid approach to overrule this flag when a custom image is defined.
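The three deployment options described above could be resolved roughly like this (a sketch under the assumption that a custom image overrules the --container-deployment flag; names are hypothetical, not SeBS's actual API):

```python
from enum import Enum
from typing import Optional

class DeploymentMode(Enum):
    CODE_PACKAGE = "code-package"          # Python functions as a zip
    CONTAINER = "container"                # Python functions within a SeBS image
    CUSTOM_CONTAINER = "custom-container"  # user-defined image + SeBS boilerplate

def resolve_mode(container_flag: bool, custom_image: Optional[str]) -> DeploymentMode:
    # A custom base image takes precedence over --container-deployment,
    # so the two options can never contradict each other.
    if custom_image is not None:
        return DeploymentMode.CUSTOM_CONTAINER
    return DeploymentMode.CONTAINER if container_flag else DeploymentMode.CODE_PACKAGE
```

Overruling the flag here removes the need for the assert shown in the diff above.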
…eployment, forcing a custom image deployment