External systems like Seqera Platform want to know the set of available config profiles for a Nextflow pipeline so that they can provide a drop-down list and check for typos.
The platform currently uses a custom static analyzer to detect config profiles without loading the config (i.e. without executing code). It supports the cases documented here.
I would like to remove this custom analyzer and implement the detection in the language server instead. It should be possible via `executeCommand`:

```java
public CompletableFuture<Object> executeCommand(ExecuteCommandParams params) {
```
I think the LSP client will need to provide (1) a URL to a Nextflow pipeline repo or config file and (2) a map of launch params
Detecting config profiles in a single file is easy enough -- just look for the `profiles` block and get the name of each child block. The tricky part is detecting profiles in included files.
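To make the single-file case concrete, here is a minimal line-based sketch of that scan. It is purely illustrative -- the class and method names are made up, and the real implementation would walk the language server's config AST rather than match text:

```java
import java.util.*;
import java.util.regex.*;

// Hypothetical sketch: find the `profiles` block and collect the name
// of each child block. The real implementation would use the language
// server's config AST instead of text matching.
class ProfileScanner {
    private static final Pattern CHILD = Pattern.compile("([A-Za-z_][\\w-]*)\\s*\\{");

    static List<String> profileNames(String config) {
        List<String> names = new ArrayList<>();
        boolean inProfiles = false;
        int depth = 0;
        for (String line : config.split("\n")) {
            String t = line.trim();
            if (!inProfiles) {
                if (t.startsWith("profiles") && t.contains("{")) {
                    inProfiles = true;
                    depth = 1;
                }
                continue;
            }
            // a block opened directly inside `profiles` is a profile
            if (depth == 1) {
                Matcher m = CHILD.matcher(t);
                if (m.lookingAt()) names.add(m.group(1));
            }
            for (char c : t.toCharArray()) {
                if (c == '{') depth++;
                else if (c == '}') depth--;
            }
            if (depth == 0) break; // end of the profiles block
        }
        return names;
    }
}
```

Given a config containing `profiles { docker { ... } test { ... } }`, this returns `["docker", "test"]`.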
The analyzer needs to handle dynamic config includes, such as ternary expressions and param references -- this logic already exists in platform and can simply be ported over.
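For illustration, the dynamic includes the analyzer has to resolve might look like this (the paths and param names here are made up):

```groovy
// static include -- trivially resolvable
includeConfig 'conf/base.config'

// ternary on a launch param -- requires knowing params.test at analysis time
includeConfig params.test ? 'conf/test.config' : 'conf/prod.config'

// param reference interpolated into the path
includeConfig "${params.config_dir}/extra.config"
```

This is why the custom command needs a map of launch params in addition to the repo/config URL.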
The language server also needs to be able to download included files, including remote files, which may require authentication. It may be possible to facilitate this through LSP requests -- the custom command could use some internal protocol in the request/response to allow the server and client to exchange and request information recursively:
- platform requests profiles given a URL/params
- language server responds with some request ID and provides the included URLs it needs
- platform requests again with request ID and provides requested file content
- language server does not encounter any more includes, responds with final result
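The round-trip flow above could be modeled with message shapes along these lines. This is only a sketch -- the field names (`requestId`, `neededUris`, `fileContents`, etc.) are assumptions, not an agreed protocol, and the toy server below hard-codes a single include to show the two rounds:

```java
import java.util.*;

// Illustrative message shapes for the multi-round exchange; the field
// names are assumptions, not an agreed protocol.
record ProfilesQuery(String rootUri, Map<String, String> launchParams,
                     String requestId, Map<String, String> fileContents) {}

record ProfilesReply(String requestId, List<String> neededUris,
                     List<String> profiles, boolean done) {}

class ProfilesExchange {
    // Toy server: it cannot answer until the client supplies the
    // content of an included file it discovered.
    static ProfilesReply serve(ProfilesQuery q) {
        if (!q.fileContents().containsKey("conf/base.config")) {
            // round 1: hand back a request ID and the includes we need
            return new ProfilesReply("req-1", List.of("conf/base.config"),
                                     List.of(), false);
        }
        // round 2: no more unresolved includes, return the final result
        return new ProfilesReply("req-1", List.of(),
                                 List.of("docker", "test"), true);
    }
}
```

The client would loop: call `executeCommand`, fetch whatever `neededUris` lists (using its own authentication), and call again with the same `requestId` plus the file contents, until `done` is true.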
Alternatively, we could give the language server access to the same authentication layer used by platform. That would make the language server JAR bigger, but it might avoid the extra round trips. Then again, if the language server and platform are running in the same network, request latency should be low anyway.