
Commit 8332efd

Authored by openwithcode, lkurija1, GustavBaumgart, myungjin, and kasryan
Development (#361)
* Add group associations to roles (#319)
* Sync up the generated code from the openapi generator with what we currently have (#331)
* Applied formatting (#341)
* Pre-commit setup and dev requirements (#342)
* Refactor config handling with pydantic (#332)
* Make sdk config backwards compatible (#355)
* Fix merge conflicts between development and main branch (#353)
* Optimizer compatibility with tensorflow and example for medmnist keras/pytorch (#320)
  Tensorflow compatibility for the new optimizers was added, covering fedavg, fedadam, fedadagrad, and fedyogi. A shell script for testing all 8 possible combinations of optimizers and frameworks is included. The medmnist example can now be run with keras (the folder structure was refactored to include a trainer and aggregator for keras). The typo in fedavg.py is now fixed.
* feat+fix: grpc support for hierarchical fl (#321)
  Hierarchical fl didn't work with grpc as the backend because the groupby field was not considered in the metaserver service and the p2p backend. In addition, a middle aggregator would hang even after a job completed. This deadlock occurred because the p2p backend cleanup code was called as part of a channel cleanup; in a middle aggregator, however, the p2p backend is responsible for tasks across all channels, so the cleanup couldn't finish while a broadcast task in the other channel couldn't finish. The fix moves the p2p backend cleanup code outside the channel cleanup code.
* documentation for metaserver/mqtt local (#322)
  Documentation for using metaserver lets users run examples with a local broker, including local mqtt brokers, which decreases the chances of job ID collisions. The config.json for the mnist example was modified to make switching to a local broker easier, and the README now indicates how to do this for other examples.
Co-authored-by: vboxuser <[email protected]>
* feat: asynchronous fl (#323)
  Asynchronous FL is implemented for the two-tier topology and the three-tier hierarchical topology. The main algorithm is based on the following two papers:
  - https://arxiv.org/pdf/2111.04877.pdf
  - https://arxiv.org/pdf/2106.06639.pdf
  Two examples for asynchronous fl are also added: one for a two-tier topology and the other for a three-tier hierarchical topology. This implementation includes the core algorithm but not the SecAgg algorithm presented in the papers, which is out of scope for this change.
* fix+refactor: asyncfl loss divergence (#330)
  For asyncfl, a client (trainer) should send a delta, obtained by subtracting the local weights from the original global weights after training. In the previous implementation, the whole local weights were sent to the server (aggregator), causing loss divergence. Supporting delta updates required refactoring the synchronous-fl aggregators (horizontal/{top_aggregator.py, middle_aggregator.py}) as well as the optimizers' do() function. The changes support delta updates universally across all modes (horizontal synchronous, asynchronous, and hybrid).
* fix: conflict between integer tensor and float tensor (#335)
  Model architectures can have integer tensors. Applying aggregation to those tensors causes a type mismatch and throws a runtime error: "RuntimeError: result type Float can't be cast to the desired output type Long". Integer tensors don't matter in backpropagation, so as a workaround we typecast back to the original dtype when it differs from the dtype of the weighted tensors used for aggregation. This keeps the model architecture as is.
* refactor: config for hybrid example in library (#334)
  To enable library-only execution of the hybrid example, its configuration files are updated accordingly. The revised configuration contains local mqtt and p2p broker configs, with the p2p broker selected.
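The delta-update scheme described for #330 can be sketched as follows. This is a minimal illustration using plain Python dicts of floats, not the actual Flame SDK API; names such as `compute_delta` and `apply_deltas` are hypothetical.

```python
# Sketch of delta updates (assumption: weights are a dict of
# layer name -> float; the real SDK operates on tensors).

def compute_delta(global_weights, local_weights):
    """Trainer side: send delta = local - global instead of raw local weights."""
    return {name: local_weights[name] - global_weights[name]
            for name in global_weights}

def apply_deltas(global_weights, deltas, rates):
    """Aggregator side: fold weighted deltas back into the global weights."""
    updated = dict(global_weights)
    for delta, rate in zip(deltas, rates):
        for name, value in delta.items():
            updated[name] += rate * value
    return updated

global_w = {"layer0": 1.0}
delta = compute_delta(global_w, {"layer0": 1.5})  # {'layer0': 0.5}
new_w = apply_deltas(global_w, [delta], [1.0])    # {'layer0': 1.5}
```

Sending deltas rather than full weights is what lets the same aggregation path serve synchronous, asynchronous, and hybrid modes.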
* misc: asynchronous hierarchical fl example (#340)
  Since the Flame SDK supports asynchronous FL, we add an example of asynchronous hierarchical FL for the control plane.
* chore: clean up examples folder (#336)
  The examples folder at the top-level directory had some outdated and irrelevant files. Those are now removed.
* fix: workaround for hybrid mode with two p2p backends (#345)
  Due to grpc/grpc#25364, when two p2p backends (which rely on grpc and asyncio) are defined, the hybrid mode example throws an exception: 'BlockingIOError: [Errno 35] Resource temporarily unavailable'. The issue still appears unresolved upstream. As a temporary workaround, we use two different types of backends: mqtt for one and p2p for the other. This means that when this example is executed, both metaserver and an mqtt broker (e.g., mosquitto) must be running on the local machine.
* fix: distributed mode (#344)
  Distributed mode had a bug: deepcopy(self.weights) in _update_weights() was called before 'weights' was defined as a member variable. To address this, self.weights is now initialized in __init__(). Configuration files are also revised so a distributed example can be run locally.
* example/implementation for fedprox (#339)
  This example is similar to the ones in the fedprox paper, although it currently does not simulate stragglers and uses a different dataset/architecture. A few things were changed to provide a simple process for modifying trainers, including a function in util.py and another class variable in the trainer holding information on the client-side regularizer. Additionally, tests are automated (mu=1,0.1,0.01,0.001,0), so running the example generates or modifies existing files to provide the proper configuration for an experiment.
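The client-side regularizer added for #339 follows the FedProx proximal term: the local loss is augmented with a penalty that pulls local weights toward the global model. A hedged sketch with plain floats; the function names are illustrative, not the example's actual util.py API.

```python
# FedProx-style client loss: task_loss + (mu / 2) * ||w - w_global||^2.
# Names here are hypothetical; the real trainer works on model tensors.

def proximal_term(local_weights, global_weights, mu):
    """Squared distance to the global model, scaled by mu / 2."""
    sq_dist = sum((lw - gw) ** 2
                  for lw, gw in zip(local_weights, global_weights))
    return 0.5 * mu * sq_dist

def fedprox_loss(task_loss, local_weights, global_weights, mu):
    return task_loss + proximal_term(local_weights, global_weights, mu)

# mu = 0 recovers plain local training, which is why the automated
# tests sweep mu over 1, 0.1, 0.01, 0.001, and 0.
loss = fedprox_loss(0.8, [1.0, 2.0], [1.0, 1.0], mu=0.1)  # 0.8 + 0.05 = 0.85
```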
* Create diagnose script (#348)
  * Create diagnose script
  * Make the script executable
  ---------
  Co-authored-by: Alex Ungurean <[email protected]>
* refactor+fix: configurable deployer / lib regularizer fix (#351)
  The deployer's job template file was hard-coded, which made it hard to use a different template file at deployment time. Using a different template file is useful when the underlying infrastructure differs (e.g., k8s vs knative). To support that, the template folder and file are now fed in as config variables. Also, the deployer's config info was fed as command arguments, which is cumbersome, so the config parsing part is refactored to take a configuration file instead. During testing of the deployer change, a bug in the library was identified; the fix for it is included here too. Finally, the local dns configuration in flame.sh is updated so that it works correctly across different linux distributions (e.g., archlinux and ubuntu); flame.sh was tested under archlinux and ubuntu.
* Add missing merge fix
* Make sdk config backwards compatible (#355)
---------
Co-authored-by: GustavBaumgart <[email protected]>
Co-authored-by: Myungjin Lee <[email protected]>
Co-authored-by: vboxuser <[email protected]>
Co-authored-by: alexandruuBytex <[email protected]>
Co-authored-by: Alex Ungurean <[email protected]>
Co-authored-by: elqurio <[email protected]>
* refactor: end-to-end refactoring (#360)
  The development branch is not yet fully tested and hence contains several incompatibilities and bugs. The following issues are handled:
  (1) func tag parsing in config.py (sdk): the config module had a small bug in which parsed func tags were not populated in the Channel class instance.
  (2) design and schema creation failure (control plane): the "name" field in design and the "version" field in schema are not used all the time, but they were specified as "required" fields, which caused errors during assertion checks on these fields in the openapi code.
  (3) hyperparameter update failure in mlflow (sdk): hyperparameters were no longer a dictionary, which is the format mlflow expects.
  (4) library update for new examples - asyncfl and fedprox (sdk): the asyncfl and fedprox algorithms and examples were introduced outside the development branch, which caused compatibility issues.
  (5) control plane example update (control plane): all the example code in the control plane was outdated because of configuration parsing module changes.
  (6) README file update in adult and mnist_non_orchestration_mode examples (doc): these two examples are for non-orchestration mode and will be deprecated, so a note is added to their README files.
* Update lib/python/flame/registry/mlflow.py
  Co-authored-by: elqurio <[email protected]>
---------
Co-authored-by: openwithcode <[email protected]>
Co-authored-by: elqurio <[email protected]>
---------
Co-authored-by: elqurio <[email protected]>
Co-authored-by: GustavBaumgart <[email protected]>
Co-authored-by: Myungjin Lee <[email protected]>
Co-authored-by: vboxuser <[email protected]>
Co-authored-by: alexandruuBytex <[email protected]>
Co-authored-by: Alex Ungurean <[email protected]>
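The integer-tensor workaround from #335 boils down to casting the aggregate back to the original dtype when weighting promoted it. A small numpy sketch, assuming numpy is available; the real code operates on PyTorch tensors, so this is only an illustration of the idea.

```python
import numpy as np

def aggregate(tensors, rates):
    """Weighted-sum tensors, then cast back to the original dtype if the
    weighting promoted it (e.g., int64 -> float64)."""
    orig_dtype = tensors[0].dtype
    total = sum(rate * t for rate, t in zip(rates, tensors))
    if total.dtype != orig_dtype:
        # integer tensors don't affect backprop, so the cast is safe
        total = total.astype(orig_dtype)
    return total

ints = [np.array([2, 4], dtype=np.int64), np.array([4, 8], dtype=np.int64)]
avg = aggregate(ints, [0.5, 0.5])  # dtype stays int64
```

Casting back, rather than skipping integer tensors, keeps the model architecture untouched.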
1 parent c609448 commit 8332efd

File tree

115 files changed (+3079, -2242 lines)


.pre-commit-config.yaml

Lines changed: 35 additions & 0 deletions
@@ -0,0 +1,35 @@
+repos:
+  - repo: https://github.com/pre-commit/pre-commit-hooks
+    rev: v4.4.0
+    hooks:
+      - id: check-yaml
+      - id: check-json
+      - id: end-of-file-fixer
+      - id: trailing-whitespace
+      - id: pretty-format-json
+        args: [--no-sort-keys, --autofix, --indent=4]
+      - id: check-added-large-files
+
+  - repo: https://github.com/psf/black
+    rev: 23.1.0
+    hooks:
+      - id: black
+        args: [--line-length=80, --skip-string-normalization]
+
+  - repo: https://github.com/pycqa/isort
+    rev: 5.12.0
+    hooks:
+      - id: isort
+        args: [--profile=black, --line-length=80]
+
+  - repo: https://github.com/pycqa/flake8
+    rev: 6.0.0
+    hooks:
+      - id: flake8
+        args:
+          [
+            "--ignore=E203,W503",
+            --max-complexity=10,
+            --max-line-length=80,
+            "--select=B,C,E,F,W,T4,B9",
+          ]

api/design_api.partials.yml

Lines changed: 7 additions & 5 deletions
@@ -167,6 +167,13 @@
         schema:
           type: string
         style: simple
+      - in: header
+        name: X-API-KEY
+        schema:
+          type: string
+        style: simple
+        explode: false
+        required: true
       responses:
         '200':
           description: Deleted
@@ -390,11 +397,6 @@
         style: simple
         explode: false
         required: true
-      requestBody:
-        content:
-          application/json:
-            schema:
-              $ref: '#/components/schemas/DesignSchema'
       responses:
         "200":
           description: Null response

api/design_components.partials.yml

Lines changed: 15 additions & 2 deletions
@@ -31,7 +31,6 @@ DesignInfo:
       type: string
   required:
     - id
-    - name
   example:
     name: diabete predict
     description: Helps in quick diagnosis and prediction of diabetes among patients.
@@ -80,7 +79,6 @@ DesignSchema:
       items:
         $ref: '#/components/schemas/Connector'
   required:
-    - version
     - name
     - roles
     - channels
@@ -127,6 +125,10 @@ Role:
     replica:
       format: int32
       type: integer
+    groupAssociation:
+      type: array
+      items:
+        $ref: '#/components/schemas/GroupAssociation'
   required:
     - name
   example:
@@ -137,6 +139,17 @@ Role:
     description: These are responsible to aggregate the updates from trainer nodes.
     replica: 2

+#########################
+# GroupAssociation
+#########################
+GroupAssociation:
+  type: object
+  additionalProperties:
+    type: string
+  example:
+    "param-channel": "red"
+    "global-channel": "black"
+
 #########################
 # Channel between roles
 #########################

cmd/controller/app/database/db_interfaces.go

Lines changed: 84 additions & 14 deletions
Original file line numberDiff line numberDiff line change
@@ -35,62 +35,132 @@ type DBService interface {
3535

3636
// DatasetService is an interface that defines a collection of APIs related to dataset
3737
type DatasetService interface {
38-
CreateDataset(string, openapi.DatasetInfo) (string, error)
39-
GetDatasets(string, int32) ([]openapi.DatasetInfo, error)
38+
// CreateDataset creates a new dataset in the db
39+
CreateDataset(userId string, info openapi.DatasetInfo) (string, error)
40+
41+
// GetDatasets returns a list of datasets associated with a user
42+
GetDatasets(userId string, limit int32) ([]openapi.DatasetInfo, error)
43+
44+
// GetDatasetById returns the details of a particular dataset
4045
GetDatasetById(string) (openapi.DatasetInfo, error)
4146
}
4247

4348
// DesignService is an interface that defines a collection of APIs related to design
4449
type DesignService interface {
50+
// CreateDesign adds a design to the db
4551
CreateDesign(userId string, info openapi.Design) error
52+
53+
// GetDesign returns a design associated with the given user and design ids
4654
GetDesign(userId string, designId string) (openapi.Design, error)
55+
56+
// DeleteDesign deletes the design from the db
4757
DeleteDesign(userId string, designId string) error
58+
59+
// GetDesigns returns a list of designs associated with a user
4860
GetDesigns(userId string, limit int32) ([]openapi.DesignInfo, error)
4961

62+
// CreateDesignSchema adds a schema for a design to the db
5063
CreateDesignSchema(userId string, designId string, info openapi.DesignSchema) error
64+
65+
// GetDesignSchema returns the schema of a design from the db
5166
GetDesignSchema(userId string, designId string, version string) (openapi.DesignSchema, error)
67+
68+
// GetDesignSchemas returns all the schemas associated with the given designId
5269
GetDesignSchemas(userId string, designId string) ([]openapi.DesignSchema, error)
70+
71+
// UpdateDesignSchema updates a schema for a design in the db
5372
UpdateDesignSchema(userId string, designId string, version string, info openapi.DesignSchema) error
73+
74+
// DeleteDesignSchema deletes the schema of a design from the db
5475
DeleteDesignSchema(userId string, designId string, version string) error
5576

77+
// CreateDesignCode adds the code of a design to the db
5678
CreateDesignCode(userId string, designId string, fileName string, fileVer string, fileData *os.File) error
79+
80+
// GetDesignCode retrieves the code of a design from the db
5781
GetDesignCode(userId string, designId string, version string) ([]byte, error)
82+
83+
// DeleteDesignCode deletes the code of a design from the db
5884
DeleteDesignCode(userId string, designId string, version string) error
5985
}
6086

6187
// JobService is an interface that defines a collection of APIs related to job
6288
type JobService interface {
63-
CreateJob(string, openapi.JobSpec) (openapi.JobStatus, error)
64-
DeleteJob(string, string) error
65-
GetJob(string, string) (openapi.JobSpec, error)
66-
GetJobById(string) (openapi.JobSpec, error)
67-
GetJobStatus(string, string) (openapi.JobStatus, error)
68-
GetJobs(string, int32) ([]openapi.JobStatus, error)
69-
UpdateJob(string, string, openapi.JobSpec) error
70-
UpdateJobStatus(string, string, openapi.JobStatus) error
89+
// CreateJob creates a new job
90+
CreateJob(userId string, spec openapi.JobSpec) (openapi.JobStatus, error)
91+
92+
// DeleteJob deletes a given job
93+
DeleteJob(userId string, jobId string) error
94+
95+
// GetJob gets the job associated with the provided jobId
96+
GetJob(userId string, jobId string) (openapi.JobSpec, error)
97+
98+
// GetJobById gets the job associated with the provided jobId
99+
GetJobById(jobId string) (openapi.JobSpec, error)
100+
101+
// GetJobStatus get the status of a job
102+
GetJobStatus(userId string, jobId string) (openapi.JobStatus, error)
103+
104+
// GetJobs returns the list of jobs associated with a user
105+
GetJobs(userId string, limit int32) ([]openapi.JobStatus, error)
106+
107+
// UpdateJob updates the job with the given jobId
108+
UpdateJob(userId string, jobId string, spec openapi.JobSpec) error
109+
110+
// UpdateJobStatus updates the status of a job given the user Id, job Id and the openapi.JobStatus
111+
UpdateJobStatus(userId string, jobId string, status openapi.JobStatus) error
112+
113+
// GetTaskInfo gets the information of a task given the user Id, job Id and task Id
71114
GetTaskInfo(string, string, string) (openapi.TaskInfo, error)
115+
116+
// GetTasksInfo gets the information of tasks given the user Id, job Id and a limit
72117
GetTasksInfo(string, string, int32, bool) ([]openapi.TaskInfo, error)
118+
119+
// GetTasksInfoGeneric gets the information of tasks given the user Id, job Id, limit and an option to include completed tasks
73120
GetTasksInfoGeneric(string, string, int32, bool, bool) ([]openapi.TaskInfo, error)
74121
}
75122

76123
// TaskService is an interface that defines a collection of APIs related to task
77124
type TaskService interface {
125+
// CreateTasks creates tasks given a set of objects.Task and a flag
78126
CreateTasks([]objects.Task, bool) error
127+
128+
// DeleteTasks deletes tasks given the job Id and a flag
79129
DeleteTasks(string, bool) error
130+
131+
// GetTask gets the task given the user Id, job Id and task Id
80132
GetTask(string, string, string) (map[string][]byte, error)
133+
134+
// IsOneTaskInState evaluates if one of the task is in a certain state given the job Id
81135
IsOneTaskInState(string, openapi.JobState) bool
136+
137+
// IsOneTaskInStateWithRole evaluates if one of the tasks is in a certain state and with a specific role given the job Id
82138
IsOneTaskInStateWithRole(string, openapi.JobState, string) bool
139+
140+
// MonitorTasks monitors the tasks and returns a TaskInfo channel
83141
MonitorTasks(string) (chan openapi.TaskInfo, chan error, context.CancelFunc, error)
84-
SetTaskDirtyFlag(string, bool) error
85-
UpdateTaskStateByFilter(string, openapi.JobState, map[string]interface{}) error
86-
UpdateTaskStatus(string, string, openapi.TaskStatus) error
142+
143+
// SetTaskDirtyFlag sets the dirty flag for tasks given the job Id and a flag
144+
SetTaskDirtyFlag(jobId string, dirty bool) error
145+
146+
// UpdateTaskStateByFilter updates the state of the task using a filter
147+
UpdateTaskStateByFilter(jobId string, newState openapi.JobState, userFilter map[string]interface{}) error
148+
149+
// UpdateTaskStatus updates the status of a task given the user Id, job Id, and openapi.TaskStatus
150+
UpdateTaskStatus(jobId string, taskId string, taskStatus openapi.TaskStatus) error
87151
}
88152

89153
// ComputeService is an interface that defines a collection of APIs related to computes
90154
type ComputeService interface {
155+
// RegisterCompute registers a compute given a openapi.ComputeSpec
91156
RegisterCompute(openapi.ComputeSpec) (openapi.ComputeStatus, error)
157+
158+
// GetComputeIdsByRegion gets all the compute Ids associated with a region
92159
GetComputeIdsByRegion(string) ([]string, error)
160+
161+
// GetComputeById gets the compute info given the compute Id
93162
GetComputeById(string) (openapi.ComputeSpec, error)
94-
// UpdateDeploymentStatus call replaces existing agent statuses with received statuses in collection.
163+
164+
// UpdateDeploymentStatus updates the deployment status given the compute Id, job Id and agentStatuses map
95165
UpdateDeploymentStatus(computeId string, jobId string, agentStatuses map[string]openapi.AgentState) error
96166
}

cmd/controller/app/database/mongodb/compute_test.go

Lines changed: 2 additions & 3 deletions
@@ -34,9 +34,8 @@ func TestMongoService_UpdateDeploymentStatus(t *testing.T) {
 		deploymentCollection: mt.Coll,
 	}
 	mt.AddMockResponses(mtest.CreateSuccessResponse(), mtest.CreateCursorResponse(1, "flame.deployment", mtest.FirstBatch, bson.D{}))
-	err := db.UpdateDeploymentStatus("test jobid","test compute id",
-		map[string]openapi.AgentState{"test task id": openapi.AGENT_DEPLOY_SUCCESS,
-		})
+	err := db.UpdateDeploymentStatus("test jobid", "test compute id",
+		map[string]openapi.AgentState{"test task id": openapi.AGENT_DEPLOY_SUCCESS})
 	assert.Nil(t, err)
 })
 mt.Run("status update failure", func(mt *mtest.T) {

cmd/controller/app/job/builder.go

Lines changed: 11 additions & 11 deletions
@@ -171,19 +171,19 @@ func (b *JobBuilder) getTaskTemplates() ([]string, map[string]*taskTemplate) {

 	for _, role := range b.schema.Roles {
 		template := &taskTemplate{}
-		JobConfig := &template.JobConfig
+		jobConfig := &template.JobConfig

-		JobConfig.Configure(b.jobSpec, b.jobParams.Brokers, b.jobParams.Registry, role, b.schema.Channels)
+		jobConfig.Configure(b.jobSpec, b.jobParams.Brokers, b.jobParams.Registry, role, b.schema.Channels)

-		// check channels and set default group if channels don't have groupby attributes set
-		for i := range JobConfig.Channels {
-			if len(JobConfig.Channels[i].GroupBy.Value) > 0 {
+		// check channels and set default group if channels don't have groupBy attributes set
+		for i := range jobConfig.Channels {
+			if len(jobConfig.Channels[i].GroupBy.Value) > 0 {
 				continue
 			}

-			// since there is no groupby attribute, set default
-			JobConfig.Channels[i].GroupBy.Type = groupByTypeTag
-			JobConfig.Channels[i].GroupBy.Value = append(JobConfig.Channels[i].GroupBy.Value, defaultGroup)
+			// since there is no groupBy attribute, set default
+			jobConfig.Channels[i].GroupBy.Type = groupByTypeTag
+			jobConfig.Channels[i].GroupBy.Value = append(jobConfig.Channels[i].GroupBy.Value, defaultGroup)
 		}

 		template.isDataConsumer = role.IsDataConsumer
@@ -192,7 +192,7 @@ func (b *JobBuilder) getTaskTemplates() ([]string, map[string]*taskTemplate) {
 		}
 		template.ZippedCode = b.roleCode[role.Name]
 		template.Role = role.Name
-		template.JobId = JobConfig.Job.Id
+		template.JobId = jobConfig.Job.Id

 		templates[role.Name] = template
 	}
@@ -205,9 +205,9 @@ func (b *JobBuilder) preCheck(dataRoles []string, templates map[string]*taskTemplate
 // This function will evolve as more invariants are defined
 // Before processing templates, the following invariants should be met:
 // 1. At least one data consumer role should be defined.
-// 2. a role shouled be associated with a code.
+// 2. a role should be associated with a code.
 // 3. template should be connected.
-// 4. when graph traversal starts at a data role template, the depth of groupby tag
+// 4. when graph traversal starts at a data role template, the depth of groupBy tag
 //    should strictly decrease from one channel to another.
 // 5. two different data roles cannot be connected directly.

cmd/controller/app/objects/task.go

Lines changed: 17 additions & 0 deletions
@@ -50,6 +50,7 @@ type JobConfig struct {
 	Job      JobIdName         `json:"job"`
 	Role     string            `json:"role"`
 	Realm    string            `json:"realm"`
+	Groups   map[string]string `json:"groups"`
 	Channels []openapi.Channel `json:"channels"`

 	MaxRunTime int32 `json:"maxRunTime,omitempty"`
@@ -101,6 +102,22 @@ func (cfg *JobConfig) Configure(jobSpec *openapi.JobSpec, brokers []config.Broke
 	// Realm will be updated when datasets are handled
 	cfg.Realm = ""
 	cfg.Channels = cfg.extractChannels(role.Name, channels)
+
+	// configure the groups of the job based on the groups associated with the assigned role
+	cfg.Groups = cfg.extractGroups(role.GroupAssociation)
+}
+
+// extractGroups - extracts the associated groups that a given role has of a particular job
+func (cfg *JobConfig) extractGroups(groupAssociation []map[string]string) map[string]string {
+	groups := make(map[string]string)
+
+	for _, ag := range groupAssociation {
+		for key, value := range ag {
+			groups[key] = value
+		}
+	}
+
+	return groups
 }

 func (cfg *JobConfig) extractChannels(role string, channels []openapi.Channel) []openapi.Channel {
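The `extractGroups` helper above flattens a list of channel-to-group maps into one map, with later entries overwriting earlier ones on duplicate keys. The same logic as a short Python sketch (function name hypothetical, for illustration only):

```python
def extract_groups(group_association):
    """Flatten a list of {channel: group} dicts into one mapping.
    Mirrors the Go helper above; later entries overwrite earlier ones."""
    groups = {}
    for assoc in group_association:
        groups.update(assoc)
    return groups

merged = extract_groups([{"param-channel": "red"}, {"global-channel": "black"}])
# -> {'param-channel': 'red', 'global-channel': 'black'}
```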

cmd/deployer/app/resource_handler.go

Lines changed: 8 additions & 7 deletions
@@ -36,6 +36,7 @@ import (
 	"github.com/cisco-open/flame/cmd/deployer/app/deployer"
 	"github.com/cisco-open/flame/cmd/deployer/config"
 	"github.com/cisco-open/flame/pkg/openapi"
+	"github.com/cisco-open/flame/pkg/openapi/constants"
 	pbNotify "github.com/cisco-open/flame/pkg/proto/notification"
 	"github.com/cisco-open/flame/pkg/restapi"
 	"github.com/cisco-open/flame/pkg/util"
@@ -279,8 +280,8 @@ func (r *resourceHandler) postDeploymentStatus(jobId string, taskStatuses map[st
 		taskStatuses, r.spec.ComputeId, jobId)
 	// construct url
 	uriMap := map[string]string{
-		"computeId": r.spec.ComputeId,
-		"jobId":     jobId,
+		constants.ParamComputeID: r.spec.ComputeId,
+		constants.ParamJobID:     jobId,
 	}
 	url := restapi.CreateURL(r.apiserverEp, restapi.PutDeploymentStatusEndpoint, uriMap)

@@ -296,8 +297,8 @@ func (r *resourceHandler) getDeploymentConfig(jobId string) (openapi.DeploymentC
 	zap.S().Infof("Sending request to apiserver / controller to get deployment config")
 	// construct url
 	uriMap := map[string]string{
-		"computeId": r.spec.ComputeId,
-		"jobId":     jobId,
+		constants.ParamComputeID: r.spec.ComputeId,
+		constants.ParamJobID:     jobId,
 	}
 	url := restapi.CreateURL(r.apiserverEp, restapi.GetDeploymentConfigEndpoint, uriMap)
 	code, respBody, err := restapi.HTTPGet(url)
@@ -355,9 +356,9 @@ func (r *resourceHandler) deployResources(deploymentConfig openapi.DeploymentCon
 	}

 	ctx := map[string]string{
-		"imageLoc": deploymentConfig.ImageLoc,
-		"taskId":   taskId,
-		"taskKey":  deploymentConfig.AgentKVs[taskId],
+		"imageLoc":            deploymentConfig.ImageLoc,
+		constants.ParamTaskID: taskId,
+		"taskKey":             deploymentConfig.AgentKVs[taskId],
 	}

 	rendered, renderErr := mustache.RenderFile(r.jobTemplatePath, &ctx)
