
It would be nice to get some customization around Volume Servers #40

Open
dmolik opened this issue Jun 10, 2021 · 8 comments
Labels
enhancement New feature or request

Comments


dmolik commented Jun 10, 2021

I think it might make sense to support DaemonSets and or StatefulSets w/ hostmounts, along with node taints and roles. Potentially supporting multiple hostmounts per DS.

Ultimately I'm not sure, I just can see a need for a more-customizable storage topology.
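
For illustration, a DaemonSet-shaped volume server with a hostPath mount and a taint toleration could look roughly like this. This is a sketch only; the node label, taint key, container args, and paths are assumptions for discussion, not anything the operator produces today:

```yaml
# Hypothetical: run one SeaweedFS volume server per storage node,
# selected by a node label, tolerating a "storage" taint, and
# persisting to a host directory instead of a PVC.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: seaweedfs-volume
spec:
  selector:
    matchLabels:
      app: seaweedfs-volume
  template:
    metadata:
      labels:
        app: seaweedfs-volume
    spec:
      nodeSelector:
        node-role.kubernetes.io/storage: ""   # assumed label
      tolerations:
        - key: storage                        # assumed taint key
          operator: Exists
          effect: NoSchedule
      containers:
        - name: volume
          image: chrislusf/seaweedfs
          args: ["volume", "-dir=/data", "-mserver=seaweedfs-master:9333"]
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          hostPath:
            path: /var/lib/seaweedfs          # assumed host directory
            type: DirectoryOrCreate
```

Supporting multiple hostmounts per DaemonSet would just mean repeating the hostPath volume/mount pair per disk.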


thiscantbeserious commented Jun 13, 2021

I'm with you in that there needs to be a better storage topology.

First and foremost this Operator as a whole needs a little bit of love to make it a better base for the future.

It's currently based on an old scaffolded template that has some dependency issues and a lack of tests.

I'm currently working on a better README and some examples, and after that on a full cleanup of the base Makefile and Docker build together with Travis (possibly multi-arch builds, automated tests, etc.).

Let me ask you: "What's the biggest advantage this Operator has?"

"It's simple"

and it follows in the footsteps of SeaweedFS: easy to manage and deploy onto your Kubernetes cluster without a ton of configuration files.

So I'd be careful about introducing too much configuration complexity where it isn't needed (read on; extensibility comes afterwards).

There's already the CSI driver, which was introduced as a means to deeply integrate third-party storage into Kubernetes itself. By reusing it we would gain a ton of flexibility for more advanced use cases, make Kubernetes properly understand and manage SeaweedFS, and get better integration in other areas (e.g. Portainer, Rancher), while also not having to do the work twice.

So before going down the route of a more advanced storage topology, the CSI driver would need to be integrated, if you ask me.

Like in the following order:

  1. Rework the base (as outlined above)
  2. Integrate the CSI driver
  3. Come up with better storage management together with the CSI driver

This is merely an idea I came up with just now ... so I'm not sure if it's the correct path to follow.

Would love some discussion about that, especially on keeping the balance between extensibility (only if you really need it) and simple configuration first and foremost.
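
As a rough sketch of what step 2 buys us: once the CSI driver is deployed, SeaweedFS-backed volumes could be provisioned through an ordinary StorageClass/PVC pair, something like the following. The provisioner name and parameters are my assumptions based on the seaweedfs-csi-driver project and may differ per version:

```yaml
# Hypothetical: dynamic provisioning via the SeaweedFS CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: seaweedfs
provisioner: seaweedfs-csi-driver   # assumed driver name, check your install
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: seaweedfs
  resources:
    requests:
      storage: 10Gi
```

That way the operator stays simple and Kubernetes-native tooling handles consumption of the storage.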


dmolik commented Jun 13, 2021

So my use case is actually on-prem, bare-metal clusters, and I could really use a smarter topology to make storage easier. Personally I like the idea of a simple configuration, but I recognize everyone has a slightly different setup; I like to believe thoughtful defaults are a happy medium. That being said, adding a "DS" switch to the volume stanza wouldn't be a huge config change, at least user-facing.

It would be kinda cool to see the operator also deploy the CSI driver.
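
Just to sketch what the "DS" switch could mean in the volume stanza, with purely hypothetical field names (the current Seaweed CRD has nothing like this):

```yaml
# Hypothetical extension of the Seaweed CR; group/version taken from the
# operator's sample CRs and may differ, and the volume fields below are
# invented for discussion only.
apiVersion: seaweed.seaweedfs.com/v1
kind: Seaweed
metadata:
  name: seaweed-sample
spec:
  volume:
    daemonSet: true        # hypothetical: deploy volume servers as a DaemonSet
    hostPaths:             # hypothetical: one or more host mounts per node
      - /mnt/disk1
      - /mnt/disk2
```

Defaulting `daemonSet` to false would keep the current StatefulSet behavior for everyone else.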

@thiscantbeserious
Contributor

That being said, adding a "DS" switch into the volume stanza wouldn't be a huge config change, at least user facing.

Recognized & noted.

I'm also deploying this on-prem, bare-metal, on a cluster I built myself at home :)

@thiscantbeserious thiscantbeserious added the enhancement New feature or request label Jun 13, 2021

dmolik commented Jun 13, 2021

Makes sense, makes sense. Would you like some help? I primarily use kubebuilder and would prefer to go down that route, but I'd be happy to help either way. I wanted to get the discussion going before I started writing code.

@chrislusf thoughts?

@chrislusf
Collaborator

@dmolik The current code was based on operator-sdk 0.19. In the recent 1.x releases, I think kubebuilder and operator-sdk are being integrated together.


thiscantbeserious commented Jun 19, 2021

@dmolik feel free to send PRs or keep the discussion going; code snippets, anything that helps! Some more examples with their use cases would be really helpful as well, so maybe share your config if you feel like it.

@ProjectInitiative

@thiscantbeserious you mentioned you are running this at home as well, how would you handle multiple physical nodes with varying disks in each? I think the idea of hostmounts is also a nice to have!


blampe commented Jun 10, 2022

@thiscantbeserious you mentioned you are running this at home as well, how would you handle multiple physical nodes with varying disks in each? I think the idea of hostmounts is also a nice to have!

Also curious about this :)
