Description
Is your feature request related to a problem? Please describe.
PROBLEM: As an AI/ML engineer (end-user persona) using the flamectl commands, I have trouble performing certain hyperparameter (HP) tuning tasks when I run flib on an IoT hardware platform (Cisco IR1101) using minikube.
IR1101 Platform spec: https://www.cisco.com/c/en/us/products/collateral/routers/1101-industrial-integrated-services-router/datasheet-c78-741709.html
SUGGESTION:
Since FLAME supports running workloads in a decoupled manner, with the data-plane workload running on edge nodes, it is important to support hyperparameter tuning for deep learning in an optimal way. In this specific context: 1) is there a plan to support any API-level integration with MLOps tools like Kubeflow, and 2) if Kubeflow support exists or is planned on the roadmap, can we have an API from the FLAME control plane/API server to tweak/parametrize the Kubeflow Katib component for HP tuning to improve model efficacy? A rough sketch of what such an integration could look like is shown below.
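For illustration only, here is a minimal sketch of how such an HP search could be expressed with the Kubeflow Katib Python SDK (kubeflow-katib). The objective function body is a placeholder; the call into the actual FLAME/flib training code is exactly the piece that would need the requested API, so it is marked as a hypothetical stand-in rather than an existing interface.

```python
# Sketch only: assumes the kubeflow-katib SDK is installed and a Katib
# controller is reachable from the cluster (e.g. the minikube instance on the
# IR1101). The FLAME-specific training call is a hypothetical placeholder.
import kubeflow.katib as katib


def objective(parameters):
    """Runs inside each Katib trial container."""
    lr = float(parameters["lr"])
    batch_size = int(parameters["batch_size"])

    # Hypothetical placeholder: this is where the requested FLAME API would
    # launch/score a federated training round with the given hyperparameters.
    accuracy = 1.0 - abs(lr - 0.01) - abs(batch_size - 32) / 1000.0

    # Katib collects metrics from stdout in "name=value" form by default.
    print(f"accuracy={accuracy}")


client = katib.KatibClient()
client.tune(
    name="flame-hp-tuning-sketch",
    objective=objective,
    parameters={
        "lr": katib.search.double(min=0.001, max=0.1),
        "batch_size": katib.search.int(min=16, max=128),
    },
    objective_metric_name="accuracy",
    max_trial_count=12,
    parallel_trial_count=2,
)
```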
Describe the solution you'd like
A: Optimal hyperparameter tuning that converges within a short time.
Describe alternatives you've considered
A: I know that FLAME leverages MLflow, but is there any other API-based integration with existing MLOps tools such as Kubeflow, Airflow, or others? (See the note below on how the current MLflow usage differs from what is being requested.)
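For context, MLflow covers experiment tracking, i.e. recording parameters and metrics for a run; it does not itself drive the hyperparameter search. A minimal tracking sketch, assuming a reachable MLflow tracking server (or the local ./mlruns fallback), looks like this; choosing the next set of hyperparameters is the search step that a Katib/Airflow integration would add.

```python
# Sketch only: plain MLflow tracking of one (hypothetical) training round.
import mlflow

with mlflow.start_run(run_name="flame-round-sketch"):
    # Record the hyperparameters that were used for this round.
    mlflow.log_param("lr", 0.01)
    mlflow.log_param("batch_size", 32)
    # Record the resulting metric; picking the *next* lr/batch_size is the
    # search loop that Katib (or a similar tool) would provide.
    mlflow.log_metric("accuracy", 0.93)
```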
Additional context
A: To make the day-to-day work of an AI/ML engineer (end-user persona) easier when using the FLAME project.