
Support running BinderHub on K8s without Docker #1513

@manics

Description

Proposed change

Docker has been removed from several K8s distributions. In addition, there have been requests to run BinderHub on more restricted K8s distributions such as OpenShift: https://discourse.jupyter.org/t/unable-to-attach-or-mount-volumes-unmounted-volumes-dockersocket-host/14950

Alternative options

Do nothing, though in the future we may need to modify the deployment instructions to ensure Docker is available on the K8s hosts.

Who would use this feature?

Someone who wants to run BinderHub on K8s without Docker.
Someone who wants to run BinderHub with reduced privileges.

(Optional): Suggest a solution

There are several non-Docker container builders available.

repo2podman (https://github.com/manics/repo2podman) already works, and it shouldn't be too hard to swap in one of the other builders.

In theory it should be possible to run these without full privileges, with a limited set of added Linux capabilities instead.
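As a rough illustration of that idea, the builder pod's securityContext might look something like the sketch below. This is hypothetical — the exact capabilities required depend on the builder and its storage driver, and none of this is wired into the current chart:

```yaml
# Hypothetical securityContext for a non-privileged build pod.
# The capability names are illustrative, not a tested minimal set.
securityContext:
  privileged: false
  capabilities:
    drop:
      - ALL
    add:
      - SETUID   # illustrative: user-namespace setup for rootless builds
      - SETGID
```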

So far I've managed to get a proof-of-concept podman builder running with full privileges, supported by #1512, on AWS EKS:

image:
  name: docker.io/manics/binderhub-dev
  tag: 2022-07-25-20-00

registry:
  url: docker.io
  username: <username>
  password: <password>

service:
  type: ClusterIP

config:
  BinderHub:
    base_url: /binder/
    build_capabilities:
      - privileged
    build_docker_host: ""
    build_image: "ghcr.io/manics/repo2podman:main"
    hub_url: /jupyter/
    hub_url_local: http://hub:8081/jupyter/
    image_prefix: <username>/binder-
    auth_enabled: false
    use_registry: true
  Application:
    log_level: DEBUG

extraConfig:
  0-repo2podman: |
    from binderhub.build import Build
    class Repo2PodmanBuild(Build):
        def get_r2d_cmd_options(self):
            return ["--engine=podman"] + super().get_r2d_cmd_options()
    c.BinderHub.build_class = Repo2PodmanBuild

jupyterhub:
  hub:
    baseUrl: /jupyter
    networkPolicy:
      enabled: false
  proxy:
    service:
      type: ClusterIP
    chp:
      networkPolicy:
        enabled: false
  scheduling:
    userScheduler:
      enabled: false
  ingress:
    enabled: true
    pathSuffix: "*"
    pathType: ImplementationSpecific
    # https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/ingress/annotations/
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/group.name: binder
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/scheme: internet-facing

ingress:
  enabled: true
  pathSuffix: "binder/*"
  pathType: ImplementationSpecific
  # https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/ingress/annotations/
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: binder
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/scheme: internet-facing

There are several limitations:

  • Still requires a privileged container
  • No caching, since the builder isn't connecting to an external Docker daemon (a host volume mount for the container store would probably be needed)
  • The Docker registry is playing up; not sure if that's related or something else
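The caching limitation could potentially be addressed with a host volume mount for podman's container storage, so image layers persist across builds. A hypothetical sketch of the build pod spec, assuming podman's default root storage location (the host path below is made up):

```yaml
# Hypothetical build-pod volume configuration (not part of the current
# BinderHub chart): persist podman's layer store on the host so that
# image layers are cached across builds.
volumes:
  - name: podman-storage
    hostPath:
      path: /var/lib/binderhub/podman-storage   # illustrative host path
      type: DirectoryOrCreate
containers:
  - name: builder
    image: ghcr.io/manics/repo2podman:main
    volumeMounts:
      - name: podman-storage
        mountPath: /var/lib/containers/storage  # podman's default root storage path
```

A per-node hostPath means each node keeps its own cache; a shared cache would need something like a PVC and care around concurrent access to the store.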
